query_id (string, 32 chars) | query (string, 6 to 5.38k chars) | positive_passages (list, 1 to 17 items) | negative_passages (list, 9 to 100 items) | subset (string, 7 classes)
---|---|---|---|---
fb12b45b035245d2d504113f04709f1c
|
Anonymity of Bitcoin Transactions An Analysis of Mixing Services
|
[
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
}
] |
[
{
"docid": "dff035a6e773301bd13cd0b71d874861",
"text": "Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skeletal joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.",
"title": ""
},
{
"docid": "b4622c9a168cd6e6f852bcc640afb4b3",
"text": "New developments in osteotomy techniques and methods of fixation have caused a revival of interest of osteotomies around the knee. The current consensus on the indications, patient selection and the factors influencing the outcome after high tibial osteotomy is presented. This paper highlights recent research aimed at joint pressure redistribution, fixation stability and bone healing that has led to improved surgical techniques and a decrease of post-operative time to full weight-bearing.",
"title": ""
},
{
"docid": "d5c72dd4b660376b122bbe71005335d7",
"text": "The effect of television violence on boys' aggression was investigated with consideration of teacher-rated characteristic aggressiveness, timing of frustration, and violence-related cues as moderators. Boys in Grades 2 and 3 (N = 396) watched violent or nonviolent TV in groups of 6, and half the groups were later exposed to a cue associated with the violent TV program. They were frustrated either before or after TV viewing. Aggression was measured by naturalistic observation during a game of floor hockey. Groups containing more characteristically high-aggressive boys showed higher aggression following violent TV plus the cue than following violent TV alone, which in turn produced more aggression than did the nonviolent TV condition. There was evidence that both the violent content and the cue may have suppressed aggression among groups composed primarily of boys low in characteristic aggressiveness. Results were interpreted in terms of current information-processing theories of media effects on aggression.",
"title": ""
},
{
"docid": "78d1a0f7a66d3533b1a00d865eeb6abd",
"text": "Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy and supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under -edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.",
"title": ""
},
{
"docid": "8d25383c229a3a585d54ac71e2f22fb4",
"text": "This study aimed to determine the effects of a flipped classroom (i.e., reversal of time allotment for lecture and homework) and innovative learning activities on academic success and the satisfaction of nursing students. A quasi-experimental design was used to compare three approaches to learning: traditional lecture only (LO), lecture and lecture capture back-up (LLC), and the flipped classroom approach of lecture capture with innovative classroom activities (LCI). Examination scores were higher for the flipped classroom LCI group (M = 81.89, SD = 5.02) than for both the LLC group (M = 80.70, SD = 4.25), p = 0.003, and the LO group (M = 79.79, SD = 4.51), p < 0.001. Students were less satisfied with the flipped classroom method than with either of the other methods (p < 0.001). Blending new teaching technologies with interactive classroom activities can result in improved learning but not necessarily improved student satisfaction.",
"title": ""
},
{
"docid": "641610a41dc50be68cc570068bd6d451",
"text": "Preschool children (N = 107) were divided into 4 groups on the basis of maternal report; home and shelter groups exposed to verbal and physical conflict, a home group exposed to verbal conflict only, and a home control group. Parental ratings of behavior problems and competencies and children's self-report data were collected. Results show that verbal conflict only was associated with a moderate level of conduct problems: verbal plus physical conflict was associated with clinical levels of conduct problems and moderate levels of emotional problems; and verbal plus physical conflict plus shelter residence was associated with clinical levels of conduct problems, higher level of emotional problems, and lower levels of social functioning and perceived maternal acceptance. Findings suggests a direct relationship between the nature of the conflict and residence and type and extent of adjustment problems.",
"title": ""
},
{
"docid": "1adabe21b99d7b26851d78c9a607b01d",
"text": "Text Summarization is a way to produce a text, which contains the significant portion of information of the original text(s). Different methodologies are developed till now depending upon several parameters to find the summary as the position, format and type of the sentences in an input text, formats of different words, frequency of a particular word in a text etc. But according to different languages and input sources, these parameters are varied. As result the performance of the algorithm is greatly affected. The proposed approach summarizes a text without depending upon those parameters. Here, the relevance of the sentences within the text is derived by Simplified Lesk algorithm and WordNet, an online dictionary. This approach is not only independent of the format of the text and position of a sentence in a text, as the sentences are arranged at first according to their relevance before the summarization process, the percentage of summarization can be varied according to needs. The proposed approach gives around 80% accurate results on 50% summarization of the original text with respect to the manually summarized result, performed on 50 different types and lengths of texts. We have achieved satisfactory results even upto 25% summarization of the original text.",
"title": ""
},
{
"docid": "224cb33193938d5bfb8d604a86d3641a",
"text": "We show how machine vision, learning, and planning can be combined to solve hierarchical consensus tasks. Hierarchical consensus tasks seek correct answers to a hierarchy of subtasks, where branching depends on answers at preceding levels of the hierarchy. We construct a set of hierarchical classification models that aggregate machine and human effort on different subtasks and use these inferences in planning. Optimal solution of hierarchical tasks is intractable due to the branching of task hierarchy and the long horizon of these tasks. We study Monte Carlo planning procedures that can exploit task structure to constrain the policy space for tractability. We evaluate the procedures on data collected from Galaxy Zoo II in allocating human effort and show that significant gains can be achieved.",
"title": ""
},
{
"docid": "1d356c920fb720252d827164752dffe5",
"text": "In the early days of machine learning, Donald Michie introduced two orthogonal dimensions to evaluate performance of machine learning approaches – predictive accuracy and comprehensibility of the learned hypotheses. Later definitions narrowed the focus to measures of accuracy. As a consequence, statistical/neuronal approaches have been favoured over symbolic approaches to machine learning, such as inductive logic programming (ILP). Recently, the importance of comprehensibility has been rediscovered under the slogan ‘explainable AI’. This is due to the growing interest in black-box deep learning approaches in many application domains where it is crucial that system decisions are transparent and comprehensible and in consequence trustworthy. I will give a short history of machine learning research followed by a presentation of two specific approaches of symbolic machine learning – inductive logic programming and end-user programming. Furthermore, I will present current work on explanation generation. Die Arbeitsweise der Algorithmen, die über uns entscheiden, muss transparent gemacht werden, und wir müssen die Möglichkeit bekommen, die Algorithmen zu beeinflussen. Dazu ist es unbedingt notwendig, dass die Algorithmen ihre Entscheidung begründen! Peter Arbeitsloser zu John of Us, Qualityland, Marc-Uwe Kling, 2017",
"title": ""
},
{
"docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc",
"text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.",
"title": ""
},
{
"docid": "295ec5187615caec8b904c81015f4999",
"text": "As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to \"hide\" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways.\n In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).",
"title": ""
},
{
"docid": "33fe68214ea062f2cdb310a74a9d6d8b",
"text": "In this study, the authors examine the relationship between abusive supervision and employee workplace deviance. The authors conceptualize abusive supervision as a type of aggression. They use work on retaliation and direct and displaced aggression as a foundation for examining employees' reactions to abusive supervision. The authors predict abusive supervision will be related to supervisor-directed deviance, organizational deviance, and interpersonal deviance. Additionally, the authors examine the moderating effects of negative reciprocity beliefs. They hypothesized that the relationship between abusive supervision and supervisor-directed deviance would be stronger when individuals hold higher negative reciprocity beliefs. The results support this hypothesis. The implications of the results for understanding destructive behaviors in the workplace are examined.",
"title": ""
},
{
"docid": "18acdeb37257f2f7f10a5baa8957a257",
"text": "Time-memory trade-off methods provide means to invert one way functions. Such attacks offer a flexible trade-off between running time and memory cost in accordance to users' computational resources. In particular, they can be applied to hash values of passwords in order to recover the plaintext. They were introduced by Martin Hellman and later improved by Philippe Oechslin with the introduction of rainbow tables. The drawbacks of rainbow tables are that they do not always guarantee a successful inversion. We address this issue in this paper. In the context of passwords, it is pertinent that frequently used passwords are incorporated in the rainbow table. It has been known that up to 4 given passwords can be incorporated into a chain but it is an open problem if more than 4 passwords can be achieved. We solve this problem by showing that it is possible to incorporate more of such passwords along a chain. Furthermore, we prove that this results in faster recovery of such passwords during the online running phase as opposed to assigning them at the beginning of the chains. For large chain lengths, the average improvement translates to 3 times the speed increase during the online recovery time.",
"title": ""
},
{
"docid": "2d7ff73a3fb435bd11633f650b23172e",
"text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.",
"title": ""
},
{
"docid": "ec4bf9499f16c415ccb586a974671bf1",
"text": "Memory circuit elements, namely memristive, memcapacitive and meminductive systems, are gaining considerable attention due to their ubiquity and use in diverse areas of science and technology. Their modeling within the most widely used environment, SPICE, is thus critical to make substantial progress in the design and analysis of complex circuits. Here, we present a collection of models of different memory circuit elements and provide a methodology for their accurate and reliable modeling in the SPICE environment. We also provide codes of these models written in the most popular SPICE versions (PSpice, LTspice, HSPICE) for the benefit of the reader. We expect this to be of great value to the growing community of scientists interested in the wide range of applications of memory circuit elements.",
"title": ""
},
{
"docid": "cff062b48160fd1551e530125a03d1f8",
"text": "In this paper, we consider a multiple-input multiple-output wireless powered communication network, where multiple users harvest energy from a dedicated power station in order to be able to transmit their information signals to an information receiving station. Employing a practical non-linear energy harvesting (EH) model, we propose a joint time allocation and power control scheme, which takes into account the uncertainty regarding the channel state information (CSI) and provides robustness against imperfect CSI knowledge. In particular, we formulate two non-convex optimization problems for different objectives, namely system sum throughput maximization and the maximization of the minimum individual throughput across all wireless powered users. To overcome the non-convexity, we apply several transformations along with a one-dimensional search to obtain an efficient resource allocation algorithm. Numerical results reveal that a significant performance gain can be achieved when the resource allocation is designed based on the adopted non-linear EH model instead of the conventional linear EH model. Besides, unlike a non-robust baseline scheme designed for perfect CSI, the proposed resource allocation schemes are shown to be robust against imperfect CSI knowledge.",
"title": ""
},
{
"docid": "acf6a62e487b79fc0500aa5e6bbb0b0b",
"text": "This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.",
"title": ""
},
{
"docid": "1ffc6db796b8e8a03165676c1bc48145",
"text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. e result is a practical scalable algorithm that applies to real world data. e UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.",
"title": ""
},
{
"docid": "f3a8e58eec0f243ae9fdfae78f75657d",
"text": "This paper studies the decentralized coded caching for a Fog Radio Access Network (F-RAN), whereby two edge-nodes (ENs) connected to a cloud server via fronthaul links with limited capacity are serving the requests of K r users. We consider all ENs and users are equipped with caches. A decentralized content placement is proposed to independently store contents at each network node during the off-peak hours. After that, we design a coded delivery scheme in order to deliver the user demands during the peak-hours under the objective of minimizing the normalized delivery time (NDT), which refers to the worst case delivery latency. An information-theoretic lower bound on the minimum NDT is derived for arbitrary number of ENs and users. We evaluate numerically the performance of the decentralized scheme. Additionally, we prove the approximate optimality of the decentralized scheme for a special case when the caches are only available at the ENs.",
"title": ""
}
] |
scidocsrr
|
09cf479f99dc361449129fcdf6d174b7
|
Low-Power Low-Noise CTIA Readout Integrated Circuit Design for Thermal Imaging Applications
|
[
{
"docid": "e5c4870acea1c7315cce0561f583626c",
"text": "A discussion of CMOS readout technologies for infrared (IR) imaging systems is presented. First, the description of various types of IR detector materials and structures is given. The advances of detector fabrication technology and microelectronics process technology have led to the development of large format array of IR imaging detectors. For such large IR FPA’s which is the critical component of the advanced infrared imaging system, general requirement and specifications are described. To support a good interface between FPA and downstream signal processing stage, both conventional and recently developed CMOS readout techniques are presented and discussed. Finally, future development directions including the smart focal plane concept are also introduced.",
"title": ""
}
] |
[
{
"docid": "70fbeaa603b37230d37d593a9b87f56e",
"text": "Umbilical venous catheterization is a common procedure performed in neonatal intensive care units. Hepatic collections due to inadvertent extravasation of parenteral nutrition into the liver have been described previously in literature. To recognize the clinicoradiologic features and treatment options of hepatic collections due to inadvertent extravasation of parenteral nutrition fluids caused by malpositioning of umbilical venous catheter (UVC) in the portal venous system. This is a case series describing five neonates during a 6-year period at a single tertiary care referral center, with extravasation of parenteral nutrition into the liver parenchyma causing hepatic collections. All five neonates receiving parenteral nutrition presented with abdominal distension in the second week of life. Two out of five (40%) had anemia requiring blood transfusion and 3/5 (60%) had hemodynamic instability at presentation. Ultrasound of the liver confirmed the diagnosis in all the cases. Three of the five (60%) cases underwent US-guided aspiration of the collections, one case underwent conservative management and one case required emergent laparotomy due to abdominal compartment syndrome. US used in follow-up of these cases revealed decrease in size of the lesions and/or development of calcifications. Early recognition of this complication, prompt diagnosis with US of liver and timely treatment can lead to better outcome in newborns with hepatic collections secondary to inadvertent parenteral nutrition infusion via malposition of UVC.",
"title": ""
},
{
"docid": "70242cb6aee415682c03da6bfd033845",
"text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. We treat both the full-state measurement case and the input-output case, as well as systems with disturbances / noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach with the source code available online2.",
"title": ""
},
{
"docid": "bf36c139b531fb738bff0cabf04ef006",
"text": "A new capacitive type of MEMS microphone is presented. In contrast to existing technologies which are highly specialized for this particular type of application, our approach is based on a standard process and layer system which has been in use for more than a decade now for the manufacturing of inertial sensors. For signal conversion, a mixed-signal ASIC with digital sampling of the microphone capacitance is used. The MEMS microphone yields high signal-to-noise performance (58 dB) after mounting it in a standard LGA-type package. It is well-suited for a wide range of potential applications and demonstrates the universal scope of the used process technology.",
"title": ""
},
{
"docid": "945f129f81e9b7a69a6ba9dc982ed7c6",
"text": "Geographic location of a person is important contextual information that can be used in a variety of scenarios like disaster relief, directional assistance, context-based advertisements, etc. GPS provides accurate localization outdoors but is not useful inside buildings. We propose an coarse indoor localization approach that exploits the ubiquity of smart phones with embedded sensors. GPS is used to find the building in which the user is present. The Accelerometers are used to recognize the user’s dynamic activities (going up or down stairs or an elevator) to determine his/her location within the building. We demonstrate the ability to estimate the floor-level of a user. We compare two techniques for activity classification, one is naive Bayes classifier and the other is based on dynamic time warping. The design and implementation of a localization application on the HTC G1 platform running Google Android is also presented.",
"title": ""
},
{
"docid": "aa4d12547a6b85a34ee818f1cc71d1da",
"text": "OBJECTIVE\nDevelopment of a new framework for the National Institute on Aging (NIA) to assess progress and opportunities toward stimulating and supporting rigorous research to address health disparities.\n\n\nDESIGN\nPortfolio review of NIA's health disparities research portfolio to evaluate NIA's progress in addressing priority health disparities areas.\n\n\nRESULTS\nThe NIA Health Disparities Research Framework highlights important factors for health disparities research related to aging, provides an organizing structure for tracking progress, stimulates opportunities to better delineate causal pathways and broadens the scope for malleable targets for intervention, aiding in our efforts to address health disparities in the aging population.\n\n\nCONCLUSIONS\nThe promise of health disparities research depends largely on scientific rigor that builds on past findings and aggressively pursues new approaches. The NIA Health Disparities Framework provides a landscape for stimulating interdisciplinary approaches, evaluating research productivity and identifying opportunities for innovative health disparities research related to aging.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "677f5e0ca482bf7ea7bf929ae3adbf76",
"text": "Multilevel modulation formats, such as PAM-4, have been introduced in recent years for next generation wireline communication systems for more efficient use of the available link bandwidth. High-speed ADCs with digital signal processing (DSP) can provide robust performance for such systems to compensate for the severe channel impairment as the data rate continues to increase.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "e8b2498c4a81c36f1e7816c84a5074da",
"text": "Corresponding author: Magdalena Magnowska MD, PhD Department of Gynecology, Obstetrics and Gynecologic Oncology Division of Gynecologic Oncology Poznan University of Medical Sciences 33 Polna St 60-535 Poznan, Poland Phone: +48 618 419 330 Fax: +48 616 599 645 E-mail: [email protected] 1 Department of Gynecology, Obstetrics and Gynecologic Oncology, Division of Gynecologic Oncology, Poznan University of Medical Sciences, Poznan, Poland 2 Department of Biochemistry and Pathomorphology, Chair of Gynecology, Obstetrics and Gynecologic Oncology, Poznan University of Medical Sciences, Poznan, Poland",
"title": ""
},
{
"docid": "056c5033e71eecb8a683fded0dd149bb",
"text": "There is a severe lack of knowledge regarding the brain regions involved in human sexual performance in general, and female orgasm in particular. We used [15O]-H2O positron emission tomography to measure regional cerebral blood flow (rCBF) in 12 healthy women during a nonsexual resting state, clitorally induced orgasm, sexual clitoral stimulation (sexual arousal control) and imitation of orgasm (motor output control). Extracerebral markers of sexual performance and orgasm were rectal pressure variability (RPstd) and perceived level of sexual arousal (PSA). Sexual stimulation of the clitoris (compared to rest) significantly increased rCBF in the left secondary and right dorsal primary somatosensory cortex, providing the first account of neocortical processing of sexual clitoral information. In contrast, orgasm was mainly associated with profound rCBF decreases in the neocortex when compared with the control conditions (clitoral stimulation and imitation of orgasm), particularly in the left lateral orbitofrontal cortex, inferior temporal gyrus and anterior temporal pole. Significant positive correlations were found between RPstd and rCBF in the left deep cerebellar nuclei, and between PSA and rCBF in the ventral midbrain and right caudate nucleus. We propose that decreased blood flow in the left lateral orbitofrontal cortex signifies behavioural disinhibition during orgasm in women, and that deactivation of the temporal lobe is directly related to high sexual arousal. In addition, the deep cerebellar nuclei may be involved in orgasm-specific muscle contractions while the involvement of the ventral midbrain and right caudate nucleus suggests a role for dopamine in female sexual arousal and orgasm.",
"title": ""
},
{
"docid": "5f7c0161f910f0288c86349613a9b08b",
"text": "The problem of joint feature selection across a group of related tasks has applications in many areas including biomedical informatics and computer vision. We consider the 2,1-norm regularized regression model for joint feature selection from multiple tasks, which can be derived in the probabilistic framework by assuming a suitable prior from the exponential family. One appealing feature of the 2,1-norm regularization is that it encourages multiple predictors to share similar sparsity patterns. However, the resulting optimization problem is challenging to solve due to the non-smoothness of the 2,1-norm regularization. In this paper, we propose to accelerate the computation by reformulating it as two equivalent smooth convex optimization problems which are then solved via the Nesterov’s method—an optimal first-order black-box method for smooth convex optimization. A key building block in solving the reformulations is the Euclidean projection. We show that the Euclidean projection for the first reformulation can be analytically computed, while the Euclidean projection for the second one can be computed in linear time. Empirical evaluations on several data sets verify the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "60245551fe055b67e94def9fcff15bca",
"text": "Redundancy can, in general, improve the ability and performance of parallel manipulators by implementing the redundant degrees of freedom to optimize a secondary objective function. Almost all published researches in the area of parallel manipulators redundancy were focused on the design and analysis of redundant parallel manipulators with rigid (nonconfigurable) platforms and on grasping hands to be attached to the platforms. Conventional grippers usually are not appropriate to grasp irregular or large objects. Very few studies focused on the idea of using a configurable platform as a grasping device. This paper highlights the idea of using configurable platforms in both planar and spatial redundant parallel manipulators, and generalizes their analysis. The configurable platform is actually a closed kinematic chain of mobility equal to the degree of redundancy of the manipulator. The additional redundant degrees of freedom are used in reconfiguring the shape of the platform itself. Several designs of kinematically redundant planar and spatial parallel manipulators with configurable platform are presented. Such designs can be used as a grasping device especially for irregular or large objects or even as a micro-positioning device after grasping the object. Screw algebra is used to develop a general framework that can be adapted to analyze the kinematics of any general-geometry planar or spatial kinematically redundant parallel manipulator with configurable platform.",
"title": ""
},
{
"docid": "b7b2f1c59dfc00ab6776c6178aff929c",
"text": "Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments and other devices at the networks edge, and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establish a research paradigm that combines both HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved with creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.",
"title": ""
},
{
"docid": "94a5e443ff4d6a6decdf1aeeb1460788",
"text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be",
"title": ""
},
{
"docid": "f60bf27f4f557ba4705b1f75b743e932",
"text": "Intelligent fashion outfit composition becomes more and more popular in these years. Some deep learning based approaches reveal competitive composition recently. However, the unexplainable characteristic makes such deep learning based approach cannot meet the the designer, businesses and consumers’ urge to comprehend the importance of different attributes in an outfit composition. To realize interpretable and customized fashion outfit compositions, we propose a partitioned embedding network to learn interpretable representations from clothing items. The overall network architecture consists of three components: an auto-encoder module, a supervised attributes module and a multi-independent module. The auto-encoder module serves to encode all useful information into the embedding. In the supervised attributes module, multiple attributes labels are adopted to ensure that different parts of the overall embedding correspond to different attributes. In the multi-independent module, adversarial operation are adopted to fulfill the mutually independent constraint. With the interpretable and partitioned embedding, we then construct an outfit composition graph and an attribute matching map. Given specified attributes description, our model can recommend a ranked list of outfit composition with interpretable matching scores. Extensive experiments demonstrate that 1) the partitioned embedding have unmingled parts which corresponding to different attributes and 2) outfits recommended by our model are more desirable in comparison with the existing methods.",
"title": ""
},
{
"docid": "a0e7712da82a338fda01e1fd0bb4a44e",
"text": "Compliance specifications concisely describe selected aspects of what a business operation should adhere to. To enable automated techniques for compliance checking, it is important that these requirements are specified correctly and precisely, describing exactly the behavior intended. Although there are rigorous mathematical formalisms for representing compliance rules, these are often perceived to be difficult to use for business users. Regardless of notation, however, there are often subtle but important details in compliance requirements that need to be considered. The main challenge in compliance checking is to bridge the gap between informal description and a precise specification of all requirements. In this paper, we present an approach which aims to facilitate creating and understanding formal compliance requirements by providing configurable templates that capture these details as options for commonly-required compliance requirements. These options are configured interactively with end-users, using question trees and natural language. The approach is implemented in the Process Mining Toolkit ProM.",
"title": ""
},
{
"docid": "b4c5337997d33fce8553709a6d727d75",
"text": "Helicopters are often used to transport supplies and equipment to hard-to-reach areas. When a load is carried via suspension cables below a helicopter, the load oscillates in response to helicopter motion and external disturbances, such as wind. This oscillation is dangerous and adversely affects control of the helicopter, especially when carrying heavy loads. To provide better control over the helicopter, one approach is to suppress the load swing dynamics using a command-filtering method called input shaping. This approach does not require real-time measurement or estimation of the load states. A simple model of a helicopter carrying a suspended load is developed and experimentally verified on a micro coaxial radio-controlled helicopter. In addition, the effectiveness of input shaping at eliminating suspended load oscillation is demonstrated on the helicopter. The proposed model may assist with the design of input-shaping controllers for a wide array of helicopters carrying suspended loads.",
"title": ""
},
{
"docid": "c5e56d3ff1fbc7ebbdb691d1db66cdf9",
"text": "Most data mining research is concerned with building high-quality classification models in isolation. In massive production systems, however, the ability to monitor and maintain performance over time while growing in size and scope is equally important. Many external factors may degrade classification performance including changes in data distribution, noise or bias in the source data, and the evolution of the system itself. A well-functioning system must gracefully handle all of these. This paper lays out a set of design principles for large-scale autonomous data mining systems and then demonstrates our application of these principles within the m6d automated ad targeting system. We demonstrate a comprehensive set of quality control processes that allow us monitor and maintain thousands of distinct classification models automatically, and to add new models, take on new data, and correct poorly-performing models without manual intervention or system disruption.",
"title": ""
},
{
"docid": "83c184c457e35e80ce7ff8012b5dcd06",
"text": "The goal of this paper is to enable a 3D “virtual-tour” of an apartment given a small set of monocular images of different rooms, as well as a 2D floor plan. We frame the problem as inference in a Markov Random Field which reasons about the layout of each room and its relative pose (3D rotation and translation) within the full apartment. This gives us accurate camera pose in the apartment for each image. What sets us apart from past work in layout estimation is the use of floor plans as a source of prior knowledge, as well as localization of each image within a bigger space (apartment). In particular, we exploit the floor plan to impose aspect ratio constraints across the layouts of different rooms, as well as to extract semantic information, e.g., the location of windows which are marked in floor plans. We show that this information can significantly help in resolving the challenging room-apartment alignment problem. We also derive an efficient exact inference algorithm which takes only a few ms per apartment. This is due to the fact that we exploit integral geometry as well as our new bounds on the aspect ratio of rooms which allow us to carve the space, significantly reducing the number of physically possible configurations. We demonstrate the effectiveness of our approach on a new dataset which contains over 200 apartments.",
"title": ""
},
{
"docid": "6882f244253e0367b85c76bd4884ddaa",
"text": "Publishers of news information are keen to amplify the reach of their content by making it as re-sharable as possible on social media. In this work we study the relationship between the concept of social deviance and the re-sharing of news headlines by network gatekeepers on Twitter. Do network gatekeepers have the same predilection for selecting socially deviant news items as professionals? Through a study of 8,000 news items across 8 major news outlets in the U.S. we predominately find that network gatekeepers re-share news items more often when they reference socially deviant events. At the same time we find and discuss exceptions for two outlets, suggesting a more complex picture where newsworthiness for networked gatekeepers may be moderated by other effects such as topicality or varying motivations and relationships with their audience.",
"title": ""
}
] |
scidocsrr
|
2194ef1ab674e0f341aade34f6073ca0
|
Mobile cloud computing: A survey
|
[
{
"docid": "ca4d2862ba75bfc35d8e9ada294192e1",
"text": "This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis provides characteristics influencing network performance in public safety communication networks like heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modelled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modelling, we compare it to existing ones (modelling the same scenario) using different pure movement and link based metrics. The new model shows specific characteristics like heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. The simulations show that the new model discloses new information and has a significant impact on performance analysis.",
"title": ""
}
] |
[
{
"docid": "1f8a386867ba1157655eda86a80f4555",
"text": "Body weight, length, and vocal tract length were measured for 23 rhesus macaques (Macaca mulatta) of various sizes using radiographs and computer graphic techniques. linear predictive coding analysis of tape-recorded threat vocalizations were used to determine vocal tract resonance frequencies (\"formants\") for the same animals. A new acoustic variable is proposed, \"formant dispersion,\" which should theoretically depend upon vocal tract length. Formant dispersion is the averaged difference between successive formant frequencies, and was found to be closely tied to both vocal tract length and body size. Despite the common claim that voice fundamental frequency (F0) provides an acoustic indication of body size, repeated investigations have failed to support such a relationship in many vertebrate species including humans. Formant dispersion, unlike voice pitch, is proposed to be a reliable predictor of body size in macaques, and probably many other species.",
"title": ""
},
{
"docid": "f9b11e55be907175d969cd7e76803caf",
"text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.",
"title": ""
},
{
"docid": "b4dd6c9634e86845795bcbe32216ee44",
"text": "Several program analysis tools - such as plagiarism detection and bug finding - rely on knowing a piece of code's relative semantic importance. For example, a plagiarism detector should not bother reporting two programs that have an identical simple loop counter test, but should report programs that share more distinctive code. Traditional program analysis techniques (e.g., finding data and control dependencies) are useful, but do not say how surprising or common a line of code is. Natural language processing researchers have encountered a similar problem and addressed it using an n-gram model of text frequency, derived from statistics computed over text corpora.\n We propose and compute an n-gram model for programming languages, computed over a corpus of 2.8 million JavaScript programs we downloaded from the Web. In contrast to previous techniques, we describe a code n-gram as a subgraph of the program dependence graph that contains all nodes and edges reachable in n steps from the statement. We can count n-grams in a program and count the frequency of n-grams in the corpus, enabling us to compute tf-idf-style measures that capture the differing importance of different lines of code. We demonstrate the power of this approach by implementing a plagiarism detector with accuracy that beats previous techniques, and a bug-finding tool that discovered over a dozen previously unknown bugs in a collection of real deployed programs.",
"title": ""
},
{
"docid": "f28a91e0cdb4c3528a6d04cf549358b4",
"text": "This paper presents an algorithm for calibrating erroneous tri-axis magnetometers in the magnetic field domain. Unlike existing algorithms, no simplification is made on the nature of errors to ease the estimation. A complete error model, including instrumentation errors (scale factors, nonorthogonality, and offsets) and magnetic deviations (soft and hard iron) on the host platform, is elaborated. An adaptive least squares estimator provides a consistent solution to the ellipsoid fitting problem and the magnetometer’s calibration parameters are derived. The calibration is experimentally assessed with two artificial magnetic perturbations introduced close to the sensor on the host platform and without additional perturbation. In all configurations, the algorithm successfully converges to a good estimate of the said errors. Comparing the magnetically derived headings with a GNSS/INS reference, the results show a major improvement in terms of heading accuracy after the calibration.",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "78ca8024a825fc8d5539b899ad34fc18",
"text": "In this paper, we examine whether managers use optimistic and pessimistic language in earnings press releases to provide information about expected future firm performance to the market, and whether the market responds to optimistic and pessimistic language usage in earnings press releases after controlling for the earnings surprise and other factors likely to influence the market’s response to the earnings announcement. We use textual-analysis software to measure levels of optimistic and pessimistic language for a sample of approximately 24,000 earnings press releases issued between 1998 and 2003. We find a positive (negative) association between optimistic (pessimistic) language usage and future firm performance and a significant incremental market response to optimistic and pessimistic language usage in earnings press releases. Results suggest managers use optimistic and pessimistic language to provide credible information about expected future firm performance to the market, and that the market responds to managers’ language usage.",
"title": ""
},
{
"docid": "3fc74e621d0e485e1e706367d30e0bad",
"text": "Many commercial navigation aids suffer from a number of design flaws, the most important of which are related to the human interface that conveys information to the user. Aids for the visually impaired are lightweight electronic devices that are either incorporated into a long cane, hand-held or worn by the client, warning of hazards ahead. Most aids use vibrating buttons or sound alerts to warn of upcoming obstacles, a method which is only capable of conveying very crude information regarding direction and proximity to the nearest object. Some of the more sophisticated devices use a complex audio interface in order to deliver more detailed information, but this often compromises the user's hearing, a critical impairment for a blind user. The author has produced an original design and working prototype solution which is a major first step in addressing some of these faults found in current production models for the blind.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "f560be243747927a7d6873ca0f87d9c6",
"text": "Hydrophobic interaction chromatography-high performance liquid chromatography (HIC-HPLC) is a powerful analytical method used for the separation of molecular variants of therapeutic proteins. The method has been employed for monitoring various post-translational modifications, including proteolytic fragments and domain misfolding in etanercept (Enbrel®); tryptophan oxidation, aspartic acid isomerization, the formation of cyclic imide, and α amidated carboxy terminus in recombinant therapeutic monoclonal antibodies; and carboxy terminal heterogeneity and serine fucosylation in Fc and Fab fragments. HIC-HPLC is also a powerful analytical technique for the analysis of antibody-drug conjugates. Most current analytical columns, methods, and applications are described, and critical method parameters and suitability for operation in regulated environment are discussed, in this review.",
"title": ""
},
{
"docid": "3b45e971fd172b01045d8e5241514b37",
"text": "Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent‘s utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and shows that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice.",
"title": ""
},
{
"docid": "81e1d86f37d88bfdc39602d2e04dfa20",
"text": "The working memory framework was used to investigate the factors determining the phenomenological vividness of images. Participants rated the vividness of visual or auditory images under control conditions or while performing tasks that differentially disrupted the visuospatial sketchpad and phonological loop subsystems of working memory. In Experiments 1, 2, and 6, participants imaged recently presented novel visual patterns and sequences of tones; ratings of vividness showed the predicted interaction between stimulus modality and concurrent task. The images in experiments 3, 4, 5, and 6 were based on long-term memory (LTM). They also showed an image modality by task interaction, with a clear effect of LTM variables (meaningfulness, activity, bizarreness, and stimulus familiarity), implicating both working memory and LTM in the experience of vividness.",
"title": ""
},
{
"docid": "e099186ceed71e03276ab168ecf79de7",
"text": "Twelve patients with deafferentation pain secondary to central nervous system lesions were subjected to chronic motor cortex stimulation. The motor cortex was mapped as carefully as possible and the electrode was placed in the region where muscle twitch of painful area can be observed with the lowest threshold. 5 of the 12 patients reported complete absence of previous pain with intermittent stimulation at 1 year following the initiation of this therapy. Improvements in hemiparesis was also observed in most of these patients. The pain of these patients was typically barbiturate-sensitive and morphine-resistant. Another 3 patients had some degree of residual pain but considerable reduction of pain was still obtained by stimulation. Thus, 8 of the 12 patients (67%) had continued effect of this therapy after 1 year. In 3 patients, revisions of the electrode placement were needed because stimulation became incapable of inducing muscle twitch even with higher stimulation intensity. The effect of stimulation on pain and capability of producing muscle twitch disappeared simultaneously in these cases and the effect reappeared after the revisions, indicating that appropriate stimulation of the motor cortex is definitely necessary for obtaining satisfactory pain control in these patients. None of the patients subjected to this therapy developed neither observable nor electroencephalographic seizure activity.",
"title": ""
},
{
"docid": "75fd1706bb96a1888dc9939dbe5359c2",
"text": "In this paper, we present a novel approach to ide ntify feature specific expressions of opinion in product reviews with different features and mixed emotions . The objective is realized by identifying a set of potential features in the review and extract ing opinion expressions about those features by exploiting their associatio ns. Capitalizing on the view that more closely associated words come togeth er to express an opinion about a certain feature, dependency parsing i used to identify relations between the opinion expressions. The syst em learns the set of significant relations to be used by dependency parsing and a threshold parameter which allows us to merge closely associated opinio n expressions. The data requirement is minimal as thi is a one time learning of the domain independent parameters . The associations are represented in the form of a graph which is partiti oned to finally retrieve the opinion expression describing the user specified feature. We show that the system achieves a high accuracy across all domains and performs at par with state-of-the-art systems despi t its data limitations.",
"title": ""
},
{
"docid": "d87e9a6c62c100142523baddc499320c",
"text": "Intelligent behaviour in the real-world requires the ability to acquire new knowledge from an ongoing sequence of experiences while preserving and reusing past knowledge. We propose a novel algorithm for unsupervised representation learning from piece-wise stationary visual data: Variational Autoencoder with Shared Embeddings (VASE). Based on the Minimum Description Length principle, VASE automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting. Our approach encourages the learnt representations to be disentangled, which imparts a number of desirable properties: VASE can deal sensibly with ambiguous inputs, it can enhance its own representations through imagination-based exploration, and most importantly, it exhibits semantically meaningful sharing of latents between different datasets. Compared to baselines with entangled representations, our approach is able to reason beyond surface-level statistics and perform semantically meaningful cross-domain inference.",
"title": ""
},
{
"docid": "69b1c87a06b1d83fd00d9764cdadc2e9",
"text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental",
"title": ""
},
{
"docid": "f44e5926c2aa6ff311cb2505e856217a",
"text": "This paper investigates the possibility of implementing node positioning in the ZigBee wireless sensor network by using a readily available Received Signal Strength Indicator (RSSI) infrastructure provided by the physical layer of 802.15.4 networks. In this study the RSSI is converted to the distance providing the basis for using the trilateration methods for location estimation. The software written in C# is used to solve the trilateration problem and the final results of trilateration methods are mapped using Google maps. Providing node positioning capability to the ZigBee network offers an enormous benefit to the Wireless Sensor Networks applications, possibly extending the functionality of existing software solution to include node tracking and monitoring without an additional hardware investment.",
"title": ""
},
{
"docid": "32d79366936e301c44ae4ac11784e9d8",
"text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.",
"title": ""
},
{
"docid": "c1389acb62cca5cb3cfdec34bd647835",
"text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.",
"title": ""
},
{
"docid": "18f530c400498658d73aba21f0ce984e",
"text": "Anomaly and event detection has been studied widely for having many applications in fraud detection, network intrusion detection, detection of epidemic outbreaks, and so on. In this paper we propose an algorithm that operates on a time-varying network of agents with edges representing interactions between them and (1) spots \"anomalous\" points in time at which many agents \"change\" their behavior in a way it deviates from the norm; and (2) attributes the detected anomaly to those agents that contribute to the \"change\" the most. Experiments on a large mobile phone network (of 2 million anonymous customers with 50 million interactions over a period of 6 months) shows that the \"change\"-points detected by our algorithm coincide with the social events and the festivals in our data.",
"title": ""
},
{
"docid": "536e45f7130aa40625e3119523d2e1de",
"text": "We consider the problem of Simultaneous Localization and Mapping (SLAM) from a Bayesian point of view using the Rao-Blackwellised Particle Filter (RBPF). We focus on the class of indoor mobile robots equipped with only a stereo vision sensor. Our goal is to construct dense metric maps of natural 3D point landmarks for large cyclic environments in the absence of accurate landmark position measurements and reliable motion estimates. Landmark estimates are derived from stereo vision and motion estimates are based on visual odometry. We distinguish between landmarks using the Scale Invariant Feature Transform (SIFT). Our work defers from current popular approaches that rely on reliable motion models derived from odometric hardware and accurate landmark measurements obtained with laser sensors. We present results that show that our model is a successful approach for vision-based SLAM, even in large environments. We validate our approach experimentally, producing the largest and most accurate vision-based map to date, while we identify the areas where future research should focus in order to further increase its accuracy and scalability to significantly larger",
"title": ""
}
] |
scidocsrr
|
39bab4f77ae27b7d60f132efac4d0499
|
How to use attribute-based encryption to implement role-based access control in the cloud
|
[
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
}
] |
[
{
"docid": "f9f0241c02486f6760951d3ac33cc861",
"text": "Clinical evidence indicates that swallowing, a vital function, may be impaired in sleep. To address this issue, we elicited swallows in awake and sleeping adult cats by injecting water through a nasopharyngeal tube. Our results indicate that swallowing occurs not only in non-rapid eye movement (NREM) sleep, but also in rapid eye movement (REM) sleep. In NREM sleep, the injections often caused arousal followed by swallowing, but, in the majority of cases, swallowing occurred in NREM sleep before arousal. These swallows in NREM sleep were entirely comparable to swallows in wakefulness. In contrast, the injections in REM sleep were less likely to cause arousal, and the swallows occurred as hypotonic events. Furthermore, apneas were sometimes elicited by the injections in REM sleep, and there was repetitive swallowing upon arousal. These results suggest that the hypotonic swallows of REM sleep were ineffective.",
"title": ""
},
{
"docid": "747df95d08e6e5b1802dacf4e84b6642",
"text": "One of the key requirement of many schemes is that of random numbers. Sequence of random numbers are used at several stages of a standard cryptographic protocol. A simple example is of a Vernam cipher, where a string of random numbers is added to massage string to generate the encrypted code. It is represented as C = M ⊕ K where M is the message, K is the key and C is the ciphertext. It has been mathematically shown that this simple scheme is unbreakable is key K as long as M and is used only once. For a good cryptosystem, the security of the cryptosystem is not be based on keeping the algorithm secret but solely on keeping the key secret. The quality and unpredictability of secret data is critical to securing communication by modern cryptographic techniques. Generation of such data for cryptographic purposes typically requires an unpredictable physical source of random data. In this manuscript, we present studies of three different methods for producing random number. We have tested them by studying its frequency, correlation as well as using the test suit from NIST.",
"title": ""
},
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "8d56aa104cb727bd6496cac89f1f7d9c",
"text": "In this paper, we develop a semantic annotation technique for location-based social networks to automatically annotate all places with category tags which are a crucial prerequisite for location search, recommendation services, or data cleaning. Our annotation algorithm learns a binary support vector machine (SVM) classifier for each tag in the tag space to support multi-label classification. Based on the check-in behavior of users, we extract features of places from i) explicit patterns (EP) of individual places and ii) implicit relatedness (IR) among similar places. The features extracted from EP are summarized from all check-ins at a specific place. The features from IR are derived by building a novel network of related places (NRP) where similar places are linked by virtual edges. Upon NRP, we determine the probability of a category tag for each place by exploring the relatedness of places. Finally, we conduct a comprehensive experimental study based on a real dataset collected from a location-based social network, Whrrl. The results demonstrate the suitability of our approach and show the strength of taking both EP and IR into account in feature extraction.",
"title": ""
},
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "db215a998da127466bcb5e80b750cbbb",
"text": "to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle most efficiently the workloads we put upon them. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than figuring how to rig the computing systems to get them there. Abtract The performance of current shared-memory multiprocessor systems depends on both the efficient utilization of all the architectural elements in the system (processors, memory, etc), and the workload characteristics. This Thesis has the main goal of improving the execution of workloads of parallel applications in shared-memory multiprocessor systems by using real performance information in the processor scheduling. In multiprocessor systems, users request for resources (processors) to execute their parallel applications. The Operating System is responsible to distribute the available physical resources among parallel applications in the more convenient way for both the system and the application performance. It is a typical practice of users in multiprocessor systems to request for a high number of processors assuming that the higher the processor request, the higher the number of processors allocated, and the higher the speedup achieved by their applications. However, this is not true. Parallel applications have different characteristics with respect to their scalability. Their speedup also depends on run-time parameters such as the influence of the rest of running applications. This Thesis proposes that the system should not base its decisions on the users requests only, but the system must decide, or adjust, its decisions based on real performance information calculated at run-time. The performance of parallel applications is an information that the system can dynamically measure without introducing a significant penalty in the application execution time. Using this information, the processor allocation can be decided, or modified, being robust to incorrect processor requests given by users. We also propose that the system use a target efficiency to ensure the efficient use of processors. This target efficiency is a system parameter and can be dynamically decided as a function of the characteristics of running applications or the number of queued applications. We also propose to coordinate the different scheduling levels that operate in the processor scheduling: the run-time scheduler, the processor scheduler, and the queueing system. We propose to establish an interface between levels to send and receive information, and to take scheduling decisions considering the information provided by the rest of …",
"title": ""
},
{
"docid": "6870efe6d9607c82992b5015a5336969",
"text": "We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
{
"docid": "447b689d9c7c2a6b71baf2fac2fa2a4f",
"text": "Status of this Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract Various routing protocols, including Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (ISIS), explicitly allow \"Equal-Cost Multipath\" (ECMP) routing. Some router implementations also allow equal-cost multipath usage with RIP and other routing protocols. The effect of multipath routing on a forwarder is that the forwarder potentially has several next-hops for any given destination and must use some method to choose which next-hop should be used for a given data packet.",
"title": ""
},
{
"docid": "d8b0ef94385d1379baeb499622253a02",
"text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.",
"title": ""
},
{
"docid": "81c2fca06af30c27e74267dbccd84080",
"text": "Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.",
"title": ""
},
{
"docid": "92e186ba05566110020ed92df960f3d5",
"text": "From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "4c3b4a6c173a40327c2db17772cbd242",
"text": "We reproduce four Twitter sentiment classification approaches that participated in previous SemEval editions with diverse feature sets. The reproduced approaches are combined in an ensemble, averaging the individual classifiers’ confidence scores for the three classes (positive, neutral, negative) and deciding sentiment polarity based on these averages. The experimental evaluation on SemEval data shows our re-implementations to slightly outperform their respective originals. Moreover, not too surprisingly, the ensemble of the reproduced approaches serves as a strong baseline in the current edition where it is top-ranked on the 2015 test set.",
"title": ""
},
{
"docid": "1d56b3aa89484e3b25557880ec239930",
"text": "We present an FPGA accelerator for the Non-uniform Fast Fourier Transform, which is a technique to reconstruct images from arbitrarily sampled data. We accelerate the compute-intensive interpolation step of the NuFFT Gridding algorithm by implementing it on an FPGA. In order to ensure efficient memory performance, we present a novel FPGA implementation for Geometric Tiling based sorting of the arbitrary samples. The convolution is then performed by a novel Data Translation architecture which is composed of a multi-port local memory, dynamic coordinate-generator and a plug-and-play kernel pipeline. Our implementation is in single-precision floating point and has been ported onto the BEE3 platform. Experimental results show that our FPGA implementation can generate fairly high performance without sacrificing flexibility for various data-sizes and kernel functions. We demonstrate up to 8X speedup and up to 27 times higher performance-per-watt over a comparable CPU implementation and up to 20% higher performance-per-watt when compared to a relevant GPU implementation.",
"title": ""
},
{
"docid": "1d88a06a34beff2c3e926a6d24f70036",
"text": "Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method. Introduction State-of-the art clustering methods are often based on graphical representations of the relationships among data points. For example, spectral clustering (Ng, Jordan, and Weiss 2001), normalized cut (Shi and Malik 2000) and ratio cut (Hagen and Kahng 1992) all transform the data into a weighted, undirected graph based on pairwise similarities. Clustering is then accomplished by spectral or graphtheoretic optimization procedures. See (Ding and He 2005; Li and Ding 2006) for a discussion of the relations among these graph-based methods, and also the connections to nonnegative matrix factorization. All of these methods involve a two-stage process in which an data graph is formed from the data, and then various optimization procedures are invoked on this fixed input data graph. A disadvantage of this two-stage process is that the final clustering structures are not represented explicitly in the data graph (e.g., graph-cut methods often use K-means algorithm to post-process the ∗To whom all correspondence should be addressed. This work was partially supported by US NSF-IIS 1117965, NSFIIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NIH R01 AG049371. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. results to get the clustering indicators); also, the clustering results are dependent on the quality of the input data graph (i.e., they are sensitive to the particular graph construction methods). It seems plausible that a strategy in which the optimization phase is allowed to change the data graph could have advantages relative to the two-phase strategy. In this paper we propose a novel graph-based clustering model that learns a graph with exactly k connected components (where k is the number of clusters). In our new model, instead of fixing the input data graph associated to the affinity matrix, we learn a new data similarity matrix that is a block diagonal matrix and has exactly k connected components—the k clusters. Thus, our new data similarity matrix is directly useful for the clustering task; the clustering results can be immediately obtained without requiring any post-processing to extract the clustering indicators. To achieve such ideal clustering structures, we impose a rank constraint on the Laplacian graph of the new data similarity matrix, thereby guaranteeing the existence of exactly k connected components. Considering both L2-norm and L1norm objectives, we propose two new clustering objectives and derive optimization algorithms to solve them. 
We also introduce a novel graph-construction method to initialize the graph associated with the affinity matrix. We conduct empirical studies on simulated datasets and seven real-world benchmark datasets to validate our proposed methods. The experimental results are promising— we find that our new graph-based clustering method consistently outperforms other related methods in most cases. Notation: Throughout the paper, all the matrices are written as uppercase. For a matrix M , the i-th row and the ij-th element of M are denoted by mi and mij , respectively. The trace of matrix M is denoted by Tr(M). The L2-norm of vector v is denoted by ‖v‖2, the Frobenius and the L1 norm of matrix M are denoted by ‖M‖F and ‖M‖1, respectively. New Clustering Formulations Graph-based clustering approaches typically optimize their objectives based on a given data graph associated with an affinity matrix A ∈ Rn×n (which can be symmetric or nonsymmetric), where n is the number of nodes (data points) in the graph. There are two drawbacks with these approaches: (1) the clustering performance is sensitive to the quality of the data graph construction; (2) the cluster structures are not explicit in the clustering results and a post-processing step is needed to uncover the clustering indicators. To address these two challenges, we aim to learn a new data graph S based on the given data graph A such that the new data graph is more suitable for the clustering task. In our strategy, we propose to learn a new data graph S that has exactly k connected components, where k is the number of clusters. In order to formulate a clustering objective based on this strategy, we start from the following theorem. If the affinity matrix A is nonnegative, then the Laplacian matrix LA = DA − (A + A)/2, where the degree matrix DA ∈ Rn×n is defined as a diagonal matrix whose i-th diagonal element is ∑ j(aij + aji)/2, has the following important property (Mohar 1991; Chung 1997): Theorem 1 The multiplicity k of the eigenvalue zero of the Laplacian matrix LA is equal to the number of connected components in the graph associated with A. Given a graph with affinity matrix A, Theorem 1 indicates that if rank(LA) = n − k, then the graph is an ideal graph based on which we already partition the data points into k clusters, without the need of performing K-means or other discretization procedures as is necessary with traditional graph-based clustering methods such as spectral clustering. Motivated by Theorem 1, given an initial affinity matrix A ∈ Rn×n, we learn a similarity matrix S ∈ Rn×n such that the corresponding Laplacian matrix LS = DS−(S+S)/2 is constrained to be rank(LS) = n − k. Under this constraint, the learned S is block diagonal with proper permutation, and thus we can directly partition the data points into k clusters based on S (Nie, Wang, and Huang 2014). To avoid the case that some rows of S are all zeros, we further constrain the S such that the sum of each row of S is one. Under these constraints, we learn that S that best approximates the initial affinity matrixA. Considering two different distances, the L2-norm and the L1-norm, between the given affinity matrix A and the learned similarity matrix S, we define the Constrained Laplacian Rank (CLR) for graph-based clustering as the solution to the following optimization problem: JCLR L2 = min ∑ j sij=1,sij≥0,rank(LS)=n−k ‖S −A‖2F (1) JCLR L1 = min ∑ j sij=1,sij≥0,rank(LS)=n−k ‖S −A‖1. 
(2) These problems seem very difficult to solve since LS = DS − (S +S)/2, and DS also depends on S, and the constraint rank(LS) = n−k is a complex nonlinear constraint. In the next section, we will propose novel and efficient algorithms to solve these problems. Optimization Algorithms Optimization Algorithm for Solving JCLR L2 in Eq. (1) Let σi(LS) denote the i-th smallest eigenvalue of LS . Note that σi(LS) ≥ 0 because LS is positive semidefinite. The problem (1) is equivalent to the following problem for a large enough value of λ: min ∑ j sij=1,sij≥0 ‖S −A‖2F + 2λ k ∑",
"title": ""
},
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:",
"title": ""
},
{
"docid": "61ad35eaee012d8c1bddcaeee082fa22",
"text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.",
"title": ""
},
{
"docid": "54663fcef476f15e2b5261766a19375b",
"text": "In this study, performances of classification techniques were compared in order to predict the presence of coronary artery disease (CAD). A retrospective analysis was performed in 1245 subjects (865 presence of CAD and 380 absence of CAD). We compared performances of logistic regression (LR), classification and regression tree (CART), multi-layer perceptron (MLP), radial basis function (RBF), and self-organizing feature maps (SOFM). Predictor variables were age, sex, family history of CAD, smoking status, diabetes mellitus, systemic hypertension, hypercholesterolemia, and body mass index (BMI). Performances of classification techniques were compared using ROC curve, Hierarchical Cluster Analysis (HCA), and Multidimensional Scaling (MDS). Areas under the ROC curves are 0.783, 0.753, 0.745, 0.721, and 0.675, respectively for MLP, LR, CART, RBF, and SOFM. MLP was found the best technique to predict presence of CAD in this data set, given its good classificatory performance. MLP, CART, LR, and RBF performed better than SOFM in predicting CAD in according to HCA and MDS. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "af9768101a634ab57eb2554953ef63ec",
"text": "Very recently, there has been a perfect storm of technical advances that has culminated in the emergence of a new interaction modality: on-body interfaces. Such systems enable the wearer to use their body as an input and output platform with interactive graphics. Projects such as PALMbit and Skinput sought to answer the initial and fundamental question: whether or not on-body interfaces were technologically possible. Although considerable technical work remains, we believe it is important to begin shifting the question away from how and what, and towards where, and ultimately why. These are the class of questions that inform the design of next generation systems. To better understand and explore this expansive space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. The results of this complimentary, structured exploration, point the way towards more comfortable, efficacious, and enjoyable on-body user experiences.",
"title": ""
}
] |
scidocsrr
|
7fabdf6063107d656b2ae326017db1fe
|
Interpersonal influences on adolescent materialism : A new look at the role of parents and peers
|
[
{
"docid": "d602cafe18d720f024da1b36c9283ba5",
"text": "Associations between materialism and peer relations are likely to exist in elementary school children but have not been studied previously. The first two studies introduce a new Perceived Peer Group Pressures (PPGP) Scale suitable for this age group, demonstrating that perceived pressure regarding peer culture (norms for behavioral, attitudinal, and material characteristics) can be reliably measured and that it is connected to children's responses to hypothetical peer pressure vignettes. Studies 3 and 4 evaluate the main theoretical model of associations between peer relations and materialism. Study 3 supports the hypothesis that peer rejection is related to higher perceived peer culture pressure, which in turn is associated with greater materialism. Study 4 confirms that the endorsement of social motives for materialism mediates the relationship between perceived peer pressure and materialism.",
"title": ""
}
] |
[
{
"docid": "49c1924821c326f803cefff58ca7ab67",
"text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
},
{
"docid": "3c444d8918a31831c2dc73985d511985",
"text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.",
"title": ""
},
{
"docid": "fa6ec1ff4a0849e5a4ec2dda7b20d966",
"text": "Most digital still cameras acquire imagery with a color filter array (CFA), sampling only one color value for each pixel and interpolating the other two color values afterwards. The interpolation process is commonly known as demosaicking. In general, a good demosaicking method should preserve the high-frequency information of imagery as much as possible, since such information is essential for image visual quality. We discuss in this paper two key observations for preserving high-frequency information in CFA demosaicking: (1) the high frequencies are similar across three color components, and 2) the high frequencies along the horizontal and vertical axes are essential for image quality. Our frequency analysis of CFA samples indicates that filtering a CFA image can better preserve high frequencies than filtering each color component separately. This motivates us to design an efficient filter for estimating the luminance at green pixels of the CFA image and devise an adaptive filtering approach to estimating the luminance at red and blue pixels. Experimental results on simulated CFA images, as well as raw CFA data, verify that the proposed method outperforms the existing state-of-the-art methods both visually and in terms of peak signal-to-noise ratio, at a notably lower computational cost.",
"title": ""
},
{
"docid": "eced59d8ec159f3127e7d2aeca76da96",
"text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.",
"title": ""
},
{
"docid": "c797b2a78ea6eb434159fd948c0a1bf0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "6a04e07937d1c5beef84acb0a4e0e328",
"text": "Linear hashing and spiral storage are two dynamic hashing schemes originally designed for external files. This paper shows how to adapt these two methods for hash tables stored in main memory. The necessary data structures and algorithms are described, the expected performance is analyzed mathematically, and actual execution times are obtained and compared with alternative techniques. Linear hashing is found to be both faster and easier to implement than spiral storage. Two alternative techniques are considered: a simple unbalanced binary tree and double hashing with periodic rehashing into a larger table. The retrieval time of linear hashing is similar to double hashing and substantially faster than a binary tree, except for very small trees. The loading times of double hashing (with periodic reorganization), a binary tree, and linear hashing are similar. Overall, linear hashing is a simple and efficient technique for applications where the cardinality of the key set is not known in advance.",
"title": ""
},
{
"docid": "6c4433b640cf1d7557b2e74cbd2eee85",
"text": "A compact Ka-band broadband waveguide-based travelingwave spatial power combiner is presented. The low loss micro-strip probes are symmetrically inserted into both broadwalls of waveguide, quadrupling the coupling ways but the insertion loss increases little. The measured 16 dB return-loss bandwidth of the eight-way back-toback structure is from 30 GHz to 39.4 GHz (more than 25%) and the insertion loss is less than 1 dB, which predicts the power-combining efficiency is higher than 90%.",
"title": ""
},
{
"docid": "89349e8f3e7d8df8bb8ab6f55404a91f",
"text": "Due to the high intake of sugars, especially sucrose, global trends in food processing have encouraged producers to use sweeteners, particularly synthetic ones, to a wide extent. For several years, increasing attention has been paid in the literature to the stevia (Stevia rebauidana), containing glycosidic diterpenes, for which sweetening properties have been identified. Chemical composition, nutritional value and application of stevia leaves are briefl y summarized and presented.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
},
{
"docid": "3323474060ba5f1fbbbdcb152c22a6a9",
"text": "A compact triple-band microstrip slot antenna applied to WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed for realizing triple-band characteristics. It is just composed of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. Then, to prove the validation of the design, a prototype is fabricated and measured. The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.",
"title": ""
},
{
"docid": "713010fe0ee95840e6001410f8a164cc",
"text": "Three studies tested the idea that when social identity is salient, group-based appraisals elicit specific emotions and action tendencies toward out-groups. Participants' group memberships were made salient and the collective support apparently enjoyed by the in-group was measured or manipulated. The authors then measured anger and fear (Studies 1 and 2) and anger and contempt (Study 3), as well as the desire to move against or away from the out-group. Intergroup anger was distinct from intergroup fear, and the inclination to act against the out-group was distinct from the tendency to move away from it. Participants who perceived the in-group as strong were more likely to experience anger toward the out-group and to desire to take action against it. The effects of perceived in-group strength on offensive action tendencies were mediated by anger.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "77cfc86c63ca0a7b3ed3b805ea16b9c9",
"text": "The research presented in this paper is about detecting collaborative networks inside the structure of a research social network. As case study we consider ResearchGate and SEE University academic staff. First we describe the methodology used to crawl and create an academic-academic network depending from their fields of interest. We then calculate and discuss four social network analysis centrality measures (closeness, betweenness, degree, and PageRank) for entities in this network. In addition to these metrics, we have also investigated grouping of individuals, based on automatic clustering depending from their reciprocal relationships.",
"title": ""
},
{
"docid": "7354d8c1e8253a99cfd62d8f96e57a77",
"text": "In the past few decades, clustering has been widely used in areas such as pattern recognition, data analysis, and image processing. Recently, clustering has been recognized as a primary data mining method for knowledge discovery in spatial databases, i.e. databases managing 2D or 3D points, polygons etc. or points in some d-dimensional feature space. The well-known clustering algorithms, however, have some drawbacks when applied to large spatial databases. First, they assume that all objects to be clustered reside in main memory. Second, these methods are too inefficient when applied to large databases. To overcome these limitations, new algorithms have been developed which are surveyed in this paper. These algorithms make use of efficient query processing techniques provided by spatial database systems.",
"title": ""
},
{
"docid": "23493c14053a4608203f8e77bd899445",
"text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.",
"title": ""
},
{
"docid": "3a3c0c21d94c2469bd95a103a9984354",
"text": "Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In [18], the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.",
"title": ""
}
] |
scidocsrr
|
d1fed528c5a08bb4995f74ffe1391fa8
|
Structure and function of auditory cortex: music and speech
|
[
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
}
] |
[
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "6d1f374686b98106ab4221066607721b",
"text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …",
"title": ""
},
{
"docid": "e0c71e449f4c155a993ae04ece4bc822",
"text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.",
"title": ""
},
{
"docid": "f4b6f3b281a420999b60b38c245113a6",
"text": "There is growing interest in using intranasal oxytocin (OT) to treat social dysfunction in schizophrenia and bipolar disorders (i.e., psychotic disorders). While OT treatment results have been mixed, emerging evidence suggests that OT system dysfunction may also play a role in the etiology of metabolic syndrome (MetS), which appears in one-third of individuals with psychotic disorders and associated with increased mortality. Here we examine the evidence for a potential role of the OT system in the shared risk for MetS and psychotic disorders, and its prospects for ameliorating MetS. Using several studies to demonstrate the overlapping neurobiological profiles of metabolic risk factors and psychiatric symptoms, we show that OT system dysfunction may be one common mechanism underlying MetS and psychotic disorders. Given the critical need to better understand metabolic dysregulation in these disorders, future OT trials assessing behavioural and cognitive outcomes should additionally include metabolic risk factor parameters.",
"title": ""
},
{
"docid": "8612b5e8f00fd8469ba87f1514b69fd0",
"text": "Online gaming is one of the most profitable businesses on the Internet. Among various threats to continuous player subscriptions, network lags are particularly notorious. It is widely known that frequent and long lags frustrate game players, but whether the players actually take action and leave a game is unclear. Motivated to answer this question, we apply survival analysis to a 1, 356-million-packet trace from a sizeable MMORPG, called ShenZhou Online. We find that both network delay and network loss significantly affect a player’s willingness to continue a game. For ShenZhou Online, the degrees of player “intolerance” of minimum RTT, RTT jitter, client loss rate, and server loss rate are in the proportion of 1:2:11:6. This indicates that 1) while many network games provide “ping time,” i.e., the RTT, to players to facilitate server selection, it would be more useful to provide information about delay jitters; and 2) players are much less tolerant of network loss than delay. This is due to the game designer’s decision to transfer data in TCP, where packet loss not only results in additional packet delays due to in-order delivery and retransmission, but also a lower sending rate.",
"title": ""
},
{
"docid": "63663dbc320556f7de09b5060f3815a6",
"text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.",
"title": ""
},
{
"docid": "ddc56e9f2cbe9c086089870ccec7e510",
"text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.",
"title": ""
},
{
"docid": "83aa2a89f8ecae6a84134a2736a5bb22",
"text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.",
"title": ""
},
{
"docid": "7d8884a7f6137068f8ede464cf63da5b",
"text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.",
"title": ""
},
{
"docid": "850becfa308ce7e93fea77673db8ab50",
"text": "Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as (PLAYER: Lebron, POINTS: 20, ASSISTS: 10), and a reference sentence, such as Kobe easily dropped 30 points, we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a",
"text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.",
"title": ""
},
{
"docid": "d4a96cc393a3f1ca3bca94a57e07941e",
"text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.",
"title": ""
},
{
"docid": "188c55ef248f7021a66c1f2e05c2fc98",
"text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70b6abe2cb82eead9235612c1a1998d7",
"text": "PURPOSE\nThe aim of the study was to investigate white blood cell counts and neutrophil to lymphocyte ratio (NLR) as markers of systemic inflammation in the diagnosis of localized testicular cancer as a malignancy with initially low volume.\n\n\nMATERIALS AND METHODS\nThirty-six patients with localized testicular cancer with a mean age of 34.22±14.89 years and 36 healthy controls with a mean age of 26.67±2.89 years were enrolled in the study. White blood cell counts and NLR were calculated from complete blood cell counts.\n\n\nRESULTS\nWhite blood cell counts and NLR were statistically significantly higher in patients with testicular cancer compared with the control group (p<0.0001 for all).\n\n\nCONCLUSIONS\nBoth white blood cell counts and NLR can be used as a simple test in the diagnosis of testicular cancer besides the well-known accurate serum tumor markers as AFP (alpha fetoprotein), hCG (human chorionic gonadotropin) and LDH (lactate dehydrogenase).",
"title": ""
},
{
"docid": "655413f10d0b99afd15d54d500c9ffb6",
"text": "Herbal medicine (phytomedicine) uses remedies possessing significant pharmacological activity and, consequently, potential adverse effects and drug interactions. The explosion in sales of herbal therapies has brought many products to the marketplace that do not conform to the standards of safety and efficacy that physicians and patients expect. Unfortunately, few surgeons question patients regarding their use of herbal medicines, and 70% of patients do not reveal their use of herbal medicines to their physicians and pharmacists. All surgeons should question patients about the use of the following common herbal remedies, which may increase the risk of bleeding during surgical procedures: feverfew, garlic, ginger, ginkgo, and Asian ginseng. Physicians should exercise caution in prescribing retinoids or advising skin resurfacing in patients using St John's wort, which poses a risk of photosensitivity reaction. Several herbal medicines, such as aloe vera gel, contain pharmacologically active ingredients that may aid in wound healing. Practitioners who wish to recommend herbal medicines to patients should counsel them that products labeled as supplements have not been evaluated by the US Food and Drug Administration and that no guarantee of product quality can be made.",
"title": ""
},
{
"docid": "5c45aa22bb7182259f75260c879f81d6",
"text": "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming.",
"title": ""
},
{
"docid": "0bba0afb68f80afad03d0ba3d1ce9c89",
"text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
}
] |
scidocsrr
|
8f514b69680f77c0cd9f0ab33a16e225
|
Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis
|
[
{
"docid": "882f2fa1782d530bbc2cbccdd5a194bd",
"text": "Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement was 94.31 ± 3.04%, 1.12 ± 0.69 mm and 3.65 ± 1.40 mm respectively.",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
}
] |
[
{
"docid": "4c410bb0390cc4611da4df489c89fca0",
"text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.",
"title": ""
},
{
"docid": "fb9c0650f5ac820eef3df65b7de1ff12",
"text": "Since 2013, a number of studies have enhanced the literature and have guided clinicians on viable treatment interventions outside of pharmacotherapy and surgery. Thirty-three randomized controlled trials and one large observational study on exercise and physiotherapy were published in this period. Four randomized controlled trials focused on dance interventions, eight on treatment of cognition and behavior, two on occupational therapy, and two on speech and language therapy (the latter two specifically addressed dysphagia). Three randomized controlled trials focused on multidisciplinary care models, one study on telemedicine, and four studies on alternative interventions, including music therapy and mindfulness. These studies attest to the marked interest in these therapeutic approaches and the increasing evidence base that places nonpharmacological treatments firmly within the integrated repertoire of treatment options in Parkinson's disease.",
"title": ""
},
{
"docid": "605e478250d1c49107071e47a9cb00df",
"text": "In line with the increasing use of sensors and health application, there are huge efforts on processing of collected data to extract valuable information such as accelerometer data. This study will propose activity recognition model aim to detect the activities by employing ensemble of classifiers techniques using the Wireless Sensor Data Mining (WISDM). The model will recognize six activities namely walking, jogging, upstairs, downstairs, sitting, and standing. Many experiments are conducted to determine the best classifier combination for activity recognition. An improvement is observed in the performance when the classifiers are combined than when used individually. An ensemble model is built using AdaBoost in combination with decision tree algorithm C4.5. The model effectively enhances the performance with an accuracy level of 94.04 %. Keywords—Activity Recognition; Sensors; Smart phones; accelerometer data; Data mining; Ensemble",
"title": ""
},
{
"docid": "63e58ac7e6f3b4a463e8f8182fee9be5",
"text": "In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-toend speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable “labels” they generate can be used to control synthesis in novel ways, such as varying speed and speaking style – independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",
"title": ""
},
{
"docid": "3cdbc153caaafcea54228b0c847aa536",
"text": "BACKGROUND\nAlthough the use of filling agents for soft-tissue augmentation has increased worldwide, most consensus statements do not distinguish between ethnic populations. There are, however, significant differences between Caucasian and Asian faces, reflecting not only cultural disparities, but also distinctive treatment goals. Unlike aesthetic patients in the West, who usually seek to improve the signs of aging, Asian patients are younger and request a broader range of indications.\n\n\nMETHODS\nMembers of the Asia-Pacific Consensus group-comprising specialists from the fields of dermatology, plastic surgery, anatomy, and clinical epidemiology-convened to develop consensus recommendations for Asians based on their own experience using cohesive polydensified matrix, hyaluronic acid, and calcium hydroxylapatite fillers.\n\n\nRESULTS\nThe Asian face demonstrates differences in facial structure and cosmetic ideals. Improving the forward projection of the \"T zone\" (i.e., forehead, nose, cheeks, and chin) forms the basis of a safe and effective panfacial approach to the Asian face. Successful augmentation may be achieved with both (1) high- and low-viscosity cohesive polydensified matrix/hyaluronic acid and (2) calcium hydroxylapatite for most indications, although some constraints apply.\n\n\nCONCLUSION\nThe Asia-Pacific Consensus recommendations are the first developed specifically for the use of fillers in Asian populations.\n\n\nCLINCIAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "a0e243a0edd585303a84fda47b1ae1e1",
"text": "Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for language generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum-likelihood or used convolutional networks for generation. In this work, we show that recurrent neural networks can be trained to generate text with GANs from scratch using curriculum learning, by slowly teaching the model to generate sequences of increasing and variable length. We empirically show that our approach vastly improves the quality of generated sequences compared to a convolutional baseline. 1",
"title": ""
},
{
"docid": "45bd038dd94d388f945c041e7c04b725",
"text": "Entomophagy is widespread among nonhuman primates and is common among many human communities. However, the extent and patterns of entomophagy vary substantially both in humans and nonhuman primates. Here we synthesize the literature to examine why humans and other primates eat insects and what accounts for the variation in the extent to which they do so. Variation in the availability of insects is clearly important, but less understood is the role of nutrients in entomophagy. We apply a multidimensional analytical approach, the right-angled mixture triangle, to published data on the macronutrient compositions of insects to address this. Results showed that insects eaten by humans spanned a wide range of protein-to-fat ratios but were generally nutrient dense, whereas insects with high protein-to-fat ratios were eaten by nonhuman primates. Although suggestive, our survey exposes a need for additional, standardized, data.",
"title": ""
},
{
"docid": "939cd6055f850b8fdb6ba869d375cf25",
"text": "...although PPP lessons are often supplemented with skills lessons, most students taught mainly through conventional approaches such as PPP leave school unable to communicate effectively in English (Stern, 1983). This situation has prompted many ELT professionals to take note of... second language acquisition (SLA) studies... and turn towards holistic approaches where meaning is central and where opportunities for language use abound. Task-based learning is one such approach...",
"title": ""
},
{
"docid": "e623ce85fdeead09fa746e9ae793806e",
"text": "In this paper, we aim to construct a deep neural network which embeds high dimensional symmetric positive definite (SPD) matrices into a more discriminative low dimensional SPD manifold. To this end, we develop two types of basic layers: a 2D fully connected layer which reduces the dimensionality of the SPD matrices, and a symmetrically clean layer which achieves non-linear mapping. Specifically, we extend the classical fully connected layer such that it is suitable for SPD matrices, and we further show that SPD matrices with symmetric pair elements setting zero operations are still symmetric positive definite. Finally, we complete the construction of the deep neural network for SPD manifold learning by stacking the two layers. Experiments on several face datasets demonstrate the effectiveness of the proposed method. Introduction Symmetric positive definite (SPD) matrices have shown powerful representation abilities of encoding image and video information. In computer vision community, the SPD matrix representation has been widely employed in many applications, such as face recognition (Pang, Yuan, and Li 2008; Huang et al. 2015; Wu et al. 2015; Li et al. 2015), object recognition (Tuzel, Porikli, and Meer 2006; Jayasumana et al. 2013; Harandi, Salzmann, and Hartley 2014; Yin et al. 2016), action recognition (Harandi et al. 2016), and visual tracking (Wu et al. 2015). The SPD matrices form a Riemannian manifold, where the Euclidean distance is no longer a suitable metric. Previous works on analyzing the SPD manifold mainly fall into two categories: the local approximation method and the kernel method, as shown in Figure 1(a). The local approximation method (Tuzel, Porikli, and Meer 2006; Sivalingam et al. 2009; Tosato et al. 2010; Carreira et al. 2012; Vemulapalli and Jacobs 2015) locally flattens the manifold and approximates the SPD matrix by a point of the tangent space. The kernel method (Harandi et al. 2012; Wang et al. 2012; Jayasumana et al. 2013; Li et al. 2013; Quang, San Biagio, and Murino 2014; Yin et al. 2016) embeds the manifold into a higher dimensional Reproducing Kernel Hilbert Space (RKHS) via kernel functions. On new ∗corresponding author Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. SPD Manifold Tangent Space",
"title": ""
},
{
"docid": "90e5fc05d96e84668816eb70a06ab709",
"text": "This paper introduces a cooperative parallel metaheuristic for solving the capacitated vehicle routing problem. The proposed metaheuristic consists of multiple parallel tabu search threads that cooperate by asynchronously exchanging best found solutions through a common solution pool. The solutions sent to the pool are clustered according to their similarities. The search history information identified from the solution clusters is applied to guide the intensification or diversification of the tabu search threads. Computational experiments on two sets of large scale benchmarks from the literature demonstrate that the suggested metaheuristic is highly competitive, providing new best solutions to ten of those well-studied instances.",
"title": ""
},
{
"docid": "8ec018e0fc4ca7220387854bdd034a58",
"text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.",
"title": ""
},
{
"docid": "a45109840baf74c61b5b6b8f34ac81d5",
"text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.",
"title": ""
},
{
"docid": "a33d982b4dde7c22ffc3c26214b35966",
"text": "Background: In most cases, bug resolution is a collaborative activity among developers in software development where each developer contributes his or her ideas on how to resolve the bug. Although only one developer is recorded as the actual fixer for the bug, the contribution of the developers who participated in the collaboration cannot be neglected.\n Aims: This paper proposes a new approach, called DRETOM (Developer REcommendation based on TOpic Models), to recommending developers for bug resolution in collaborative behavior.\n Method: The proposed approach models developers' interest in and expertise on bug resolving activities based on topic models that are built from their historical bug resolving records. Given a new bug report, DRETOM recommends a ranked list of developers who are potential to participate in and contribute to resolving the new bug according to these developers' interest in and expertise on resolving it.\n Results: Experimental results on Eclipse JDT and Mozilla Firefox projects show that DRETOM can achieve high recall up to 82% and 50% with top 5 and top 7 recommendations respectively.\n Conclusion: Developers' interest in bug resolving activities should be taken into consideration. On condition that the parameter θ of DRETOM is set properly with trials, the proposed approach is practically useful in terms of recall.",
"title": ""
},
{
"docid": "b63077105e140546a7485167339fdf62",
"text": "Deep multi-layer perceptron neural networks are used in many state-of-the-art systems for machine perception (e.g., speech-to-text, image classification, and object detection). Once a network is trained to do a specific task, e.g., finegrained bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as finegrained flower recognition. When new tasks are added, deep neural networks are prone to catastrophically forgetting previously learned information. Catastrophic forgetting has hindered the use of neural networks in deployed applications that require lifelong learning. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have yet to be compared and the kinds of tests used to evaluate individual methods vary greatly. In this paper, we compare multiple mechanisms designed to mitigate catastrophic forgetting in neural networks. Experiments showed that the mechanism(s) that are critical for optimal performance vary based on the incremental training paradigm and type of data being used.",
"title": ""
},
{
"docid": "e6d79e4a616c4913b605bc3c2a6f8776",
"text": "In this paper an alternative approach to solve uncertain Stochastic Differential Equation (SDE) is proposed. This uncertainty occurs due to the involved parameters in system and these are considered as Triangular Fuzzy Numbers (TFN). Here the proposed fuzzy arithmetic in [2] is used as a tool to handle Fuzzy Stochastic Differential Equation (FSDE). In particular, a system of Ito stochastic differential equations is analysed with fuzzy parameters. Further exact and Euler Maruyama approximation methods with fuzzy values are demonstrated and solved some standard SDE.",
"title": ""
},
{
"docid": "32e01378d68ae1610f538e60edf24d9a",
"text": "Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems. In recent studies, researchers use neural language models and encoder-decoder frameworks for table-to-text generation. However, these neural network-based approaches typically do not model the order of content during text generation. When a human writes a summary based on a given table, he or she would probably consider the content order before wording. In this paper, we propose an order-planning text generation model, where order information is explicitly captured by link-based attention. Then a self-adaptive gate combines the link-based attention with traditional content-based attention. We conducted experiments on the WIKIBIO dataset and achieve higher performance than previous methods in terms of BLEU, ROUGE, and NIST scores; we also performed ablation tests to analyze each component of our model.",
"title": ""
},
{
"docid": "228b94be5c79161343376360cd35db6f",
"text": "Linked Data is proclaimed as the Semantic Web done right. The Semantic Web is an incomplete dream so far, but a homogeneous revolutionary platform as a network of Blockchains could be the solution to this not optimal reality. This research paper introduces some initial hints and ideas about how a futuristic Internet that might be composed and powered by Blockchains networks would be constructed and designed to interconnect data and meaning, thus allow reasoning. An industrial application where Blockchain and Linked Data fits perfectly as a Supply Chain management system is also researched.",
"title": ""
},
{
"docid": "43cdcbfaca6c69cdb8652761f7e8b140",
"text": "Aggregation of local features is a well-studied approach for image as well as 3D model retrieval (3DMR). A carefully designed local 3D geometric feature is able to describe detailed local geometry of 3D model, often with invariance to geometric transformations that include 3D rotation of local 3D regions. For efficient 3DMR, these local features are aggregated into a feature per 3D model. A recent alternative, end-toend 3D Deep Convolutional Neural Network (3D-DCNN) [7][33], has achieved accuracy superior to the abovementioned aggregation-of-local-features approach. However, current 3D-DCNN based methods have weaknesses; they lack invariance against 3D rotation, and they often miss detailed geometrical features due to their quantization of shapes into coarse voxels in applying 3D-DCNN. In this paper, we propose a novel deep neural network for 3DMR called Deep Local feature Aggregation Network (DLAN) that combines extraction of rotation-invariant 3D local features and their aggregation in a single deep architecture. The DLAN describes local 3D regions of a 3D model by using a set of 3D geometric features invariant to local rotation. The DLAN then aggregates the set of features into a (global) rotation-invariant and compact feature per 3D model. Experimental evaluation shows that the DLAN outperforms the existing deep learning-based 3DMR algorithms.",
"title": ""
},
{
"docid": "cc5b1a8100e8d4d7be5dfb80c4866aab",
"text": "A fundamental characteristic of multicellular organisms is the specialization of functional cell types through the process of differentiation. These specialized cell types not only characterize the normal functioning of different organs and tissues, they can also be used as cellular biomarkers of a variety of different disease states and therapeutic/vaccine responses. In order to serve as a reference for cell type representation, the Cell Ontology has been developed to provide a standard nomenclature of defined cell types for comparative analysis and biomarker discovery. Historically, these cell types have been defined based on unique cellular shapes and structures, anatomic locations, and marker protein expression. However, we are now experiencing a revolution in cellular characterization resulting from the application of new high-throughput, high-content cytometry and sequencing technologies. The resulting explosion in the number of distinct cell types being identified is challenging the current paradigm for cell type definition in the Cell Ontology. In this paper, we provide examples of state-of-the-art cellular biomarker characterization using high-content cytometry and single cell RNA sequencing, and present strategies for standardized cell type representations based on the data outputs from these cutting-edge technologies, including “context annotations” in the form of standardized experiment metadata about the specimen source analyzed and marker genes that serve as the most useful features in machine learning-based cell type classification models. We also propose a statistical strategy for comparing new experiment data to these standardized cell type representations. The advent of high-throughput/high-content single cell technologies is leading to an explosion in the number of distinct cell types being identified. It will be critical for the bioinformatics community to develop and adopt data standard conventions that will be compatible with these new technologies and support the data representation needs of the research community. The proposals enumerated here will serve as a useful starting point to address these challenges.",
"title": ""
},
{
"docid": "199079ff97d1a48819f8185c2ef23472",
"text": "Identifying domain-dependent opinion words is a key problem in opinion mining and has been studied by several researchers. However, existing work has been focused on adjectives and to some extent verbs. Limited work has been done on nouns and noun phrases. In our work, we used the feature-based opinion mining model, and we found that in some domains nouns and noun phrases that indicate product features may also imply opinions. In many such cases, these nouns are not subjective but objective. Their involved sentences are also objective sentences and imply positive or negative opinions. Identifying such nouns and noun phrases and their polarities is very challenging but critical for effective opinion mining in these domains. To the best of our knowledge, this problem has not been studied in the literature. This paper proposes a method to deal with the problem. Experimental results based on real-life datasets show promising results.",
"title": ""
}
] |
scidocsrr
|
7edbbbbf7f0eba93ba2c8dbc8920c710
|
That's What Friends Are For: Inferring Location in Online Social Media Platforms Based on Social Relationships
|
[
{
"docid": "a48501fc0bde8917624185981741f0e3",
"text": "We use a sample of publicly available data on Twitter to study networks of mostly weak asymmetric ties. We show that a substantial share of ties lie within the same metropolitan region. As we examine ties between regional clusters, we find that distance, national borders and the difference in languages all affect the pattern of ties. However, Twitter connections show the more substantial correlation with the network of airline flights, highlighting the importance of looking not just at distance but at pre-existing ties between places.",
"title": ""
}
] |
[
{
"docid": "c5ee2a4e38dfa27bc9d77edcd062612f",
"text": "We perform transaction-level analyses of entrusted loans – the largest component of shadow banking in China. There are two types – affiliated and non-affiliated. The latter involve a much higher interest rate than the former and official bank loan rates, and largely flow into the real estate industry. Both involve firms with privileged access to cheap capital to channel funds to less privileged firms and increase when credit is tight. The pricing of entrusted loans, especially that of non-affiliated loans, incorporates fundamental and informational risks. Stock market reactions suggest that both affiliated and non-affiliated loans are fairly-compensated investments.",
"title": ""
},
{
"docid": "c08bbd6acd494d36afc60f9612fee0bb",
"text": "Guided wave imaging has shown great potential for structural health monitoring applications by providing a way to visualize and characterize structural damage. For successful implementation of delay-and-sum and other elliptical imaging algorithms employing guided ultrasonic waves, some degree of mode purity is required because echoes from undesired modes cause imaging artifacts that obscure damage. But it is also desirable to utilize multiple modes because different modes may exhibit increased sensitivity to different types and orientations of defects. The well-known modetuning effect can be employed to use the same PZT transducers for generating and receiving multiple modes by exciting the transducers with narrowband tone bursts at different frequencies. However, this process is inconvenient and timeconsuming, particularly if extensive signal averaging is required to achieve a satisfactory signal-to-noise ratio. In addition, both acquisition time and data storage requirements may be prohibitive if signals from many narrowband tone burst excitations are measured. In this paper, we utilize a chirp excitation to excite PZT transducers over a broad frequency range to acquire multi-modal data with a single transmission, which can significantly reduce both the measurement time and the quantity of data. Each received signal from a chirp excitation is post-processed to obtain multiple signals corresponding to different narrowband frequency ranges. Narrowband signals with the best mode purity and echo shape are selected and then used to generate multiple images of damage in a target structure. The efficacy of the proposed technique is demonstrated experimentally using an aluminum plate instrumented with a spatially distributed array of piezoelectric sensors and with simulated damage.",
"title": ""
},
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "29e1a872da2b6432b30d4620a9cd692b",
"text": "Fibromyalgia and depression might represent two manifestations of affective spectrum disorder. They share similar pathophysiology and are largely targeted by the same drugs with dual action on serotoninergic and noradrenergic systems. Here, we review evidence for genetic and environmental factors that predispose, precipitate, and perpetuate fibromyalgia and depression and include laboratory findings on the role of depression in fibromyalgia. Further, we comment on several aspects of fibromyalgia which support the development of reactive depression, substantially more so than in other chronic pain syndromes. However, while sharing many features with depression, fibromyalgia is associated with somatic comorbidities and absolutely defined by fluctuating spontaneous widespread pain. Fibromyalgia may, therefore, be more appropriately grouped together with other functional pain disorders, while psychologically distressed subgroups grouped additionally or solely with affective spectrum disorders.",
"title": ""
},
{
"docid": "881de4b66bdba0a45caaa48a13b33388",
"text": "This paper describes a Di e-Hellman based encryption scheme, DHAES. The scheme is as e cient as ElGamal encryption, but has stronger security properties. Furthermore, these security properties are proven to hold under appropriate assumptions on the underlying primitive. We show that DHAES has not only the \\basic\" property of secure encryption (namely privacy under a chosen-plaintext attack) but also achieves privacy under both non-adaptive and adaptive chosenciphertext attacks. (And hence it also achieves non-malleability.) DHAES is built in a generic way from lower-level primitives: a symmetric encryption scheme, a message authentication code, group operations in an arbitrary group, and a cryptographic hash function. In particular, the underlying group may be an elliptic-curve group or the multiplicative group of integers modulo a prime number. The proofs of security are based on appropriate assumptions about the hardness of the Di e-Hellman problem and the assumption that the underlying symmetric primitives are secure. The assumptions are all standard in the sense that no random oracles are involved. We suggest that DHAES provides an attractive starting point for developing public-key encryption standards based on the Di e-Hellman assumption.",
"title": ""
},
{
"docid": "bffa6ec262531b92d86a8f4c7725cb22",
"text": "Vehicular Ad-Hoc Networks (VANETs) enable communication among vehicles as well as between vehicles and roadside infrastructures. Currently available software tools for VANET research still lack the ability to asses the usability of vehicular applications. In this article, we present <u>Tra</u>ffic <u>C</u>ontrol <u>I</u>nterface (TraCI) a technique for interlinking road traffic and network simulators. It permits us to control the behavior of vehicles during simulation runtime, and consequently to better understand the influence of VANET applications on traffic patterns.\n In contrast to the existing approaches, i.e., generating mobility traces that are fed to a network simulator as static input files, the online coupling allows the adaptation of drivers' behavior during simulation runtime. This technique is not limited to a special traffic simulator or to a special network simulator. We introduce a general framework for controlling the mobility which is adaptable towards other research areas.\n We describe the basic concept, design decisions and the message format of this open-source architecture. Additionally, we provide implementations for non-commercial traffic and network simulators namely SUMO and ns2, respectively. This coupling enables for the first time systematic evaluations of VANET applications in realistic settings.",
"title": ""
},
{
"docid": "b96a3320940344dea37f5deccf0e16b2",
"text": "This paper proposes a modulated hysteretic current control (MHCC) technique to improve the transient response of a DC-DC boost converter, which suffers from low bandwidth due to the existence of the right-half-plane (RHP) zero. The MHCC technique can automatically adjust the on-time value to rapidly increase the inductor current, as well as to shorten the transient response time. In addition, based on the characteristic of the RHP zero, the compensation poles and zero are deliberately adjusted to achieve fast transient response in case of load transient condition and adequate phase margin in steady state. Experimental results show the improvement of transient recovery time over 7.2 times in the load transient response compared with the conventional boost converter design when the load current changes from light to heavy or vice versa. The power consumption overhead is merely 1%.",
"title": ""
},
{
"docid": "0d6960b2817f98924f7de3b7d7774912",
"text": "Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture represenations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.",
"title": ""
},
{
"docid": "3dc4384744f2f85983bc58b0a8a241c6",
"text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. Individualized radiographs for every patient are still recommended.",
"title": ""
},
{
"docid": "4eb37f87312ce521c30858f6a97edd59",
"text": "We propose an automatic framework for quality assessment of a photograph as well as analysis of its aesthetic attributes. In contrast to the previous methods that rely on manually designed features to account for photo aesthetics, our method automatically extracts such features using a pretrained deep convolutional neural network (DCNN). To make the DCNN-extracted features more suited to our target tasks of photo quality assessment and aesthetic attribute analysis, we propose a novel feature encoding scheme, which supports vector machines-driven sparse restricted Boltzmann machines, which enhances sparseness of features and discrimination between target classes. Experimental results show that our method outperforms the current state-of-the-art methods in automatic photo quality assessment, and gives aesthetic attribute ratings that can be used for photo editing. We demonstrate that our feature encoding scheme can also be applied to general object classification task to achieve performance gains.",
"title": ""
},
{
"docid": "ca22cd618b9f118b47f5de69c4cb20fa",
"text": "Endoscopy is used for inspection of the inner surface of organs such as the colon. During endoscopic inspection of the colon or colonoscopy, a tiny video camera generates a video signal, which is displayed on a monitor for interpretation in real-time by physicians. In practice, these images are not typically captured, which may be attributed by lack of fully automated tools for capturing, analysis of important contents, and quick and easy retrieval of these contents. This paper presents the description and evaluation results of our novel software that uses new metrics based on image color and motion over time to automatically record all images of an individual endoscopic procedure into a single digitized video file. The software automatically discards out-patient video frames between different endoscopic procedures. We validated our software system on 2464 h of live video (over 265 million frames) from endoscopy units where colonoscopy and upper endoscopy were performed. Our previous classification method achieved a frame-based sensitivity of 100.00%, but only a specificity of 89.22%. Our new method achieved a frame-based sensitivity and specificity of 99.90% and 99.97%, a significant improvement. Our system is robust for day-to-day use in medical practice.",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "5a4d42c1f3361bee4f3db29d1ca203f7",
"text": "In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating “story highlights”—a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model’s output is comparable to human-written highlights in terms of both grammaticality and content.",
"title": ""
},
{
"docid": "97957590d7bec130bac3cf0f0e29cf9a",
"text": "Understanding user acceptance of the Internet, especially the intentions to use Internet commerce and mobile commerce, is important in explaining the fact that these commerce have been growing at an exponential rate in recent years. This paper studies factors of new technology to better understand and manage the electronic commerce activities. The theoretical model proposed in this paper is intended to clarify the factors as they are related to the technology acceptance model. More specifically, the relationship among trust and other factors are hypothesized. Using the technology acceptance model, this research reveals the importance of the hedonic factor. The result of this research implies that the ways of stimulating and facilitating customers' participation in mobile commerce should be differentiated from those in Internet commerce",
"title": ""
},
{
"docid": "c6d69e1e382e5ca8d84f0a477f838485",
"text": "Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.",
"title": ""
},
{
"docid": "e4c1342b2405cc7401e1f929c6c41011",
"text": "This paper introduces a protocol for the measuremen t of shoulder movement that uses a motion analysis ba sed technique and the proposed standards of the Interna tional Society of Biomechanics. The protocol demonstrates e ff ctive dynamic tracking of shoulder movements in 3D, inclu ding the movement of the thorax relative to the global coord inate system, the humerus relative to the thorax, the scapula rel ative to the thorax, the sternoclavicular joint, the acromioclavicular joint and the glenohumeral joint. This measurement protocol mu st be further tested for accuracy and repeatability using motion and imaging data from existing methods developed prior to the ISB recommendations. It is proposed to apply the valid ated model to assess pathological shoulder movement and function with the aim of developing a valuable clinical diagnostic tool t o aid surgeons in identifying optimum treatment strategies. Keywords-shoulder complex; measurement technique; motion analysis; ISB recommendations.",
"title": ""
},
{
"docid": "2d2522804c95a4a28f64358151f457ea",
"text": "On the battlefields of the future, multitudes of intelligent things will be communicating, acting, and collaborating with one another and with human warfighters. This will demand major advances in science and technology. With the growth of the Internet of Things (IoT), it's clear that industry and military IoT applications need to operate at a very large scale. Here we learn about the Internet of Battle Things (IoBT) and the unique set of challenges the military faces while under threat from adversaries.",
"title": ""
},
{
"docid": "d7527aeeb5f26f23930b8d674beb0a13",
"text": "A three-part investigation was conducted to explore the meaning of color preferences. Phase 1 used a Q-sort technique to assess intra-individual stability of preferences over 5 wk. Phase 2 used principal components analysis to discern the manner in which preferences were being made. Phase 3 used canonical correlation to evaluate a hypothesized relationship between color preferences and personality, with five scales of the Personality Research Form serving as the criterion measure. Munsell standard papers, a standard light source, and a color vision test were among control devices applied. There were marked differences in stability of color preferences. Sex differences in intra-individual stability were also apparent among the 90 subjects. An interaction of hue and lightness appeared to underlie such judgments when saturation was kept constant. An unexpected breakdown in control pointed toward the possibly powerful effect of surface finish upon color preference. No relationship to five manifest needs were found. It was concluded that the beginning steps had been undertaken toward psychometric development of a reliable technique for the measurement of color preference.",
"title": ""
},
{
"docid": "3b2a3fc20a03d829e4c019fbdbc0f2ae",
"text": "First cars equipped with 24 GHz short range radar (SRR) systems in combination with 77 GHz long range radar (LRR) system enter the market in autumn 2005 enabling new safety and comfort functions. In Europe the 24 GHz ultra wideband (UWB) frequency band is temporally allowed only till end of June 2013 with a limitation of the car pare penetration of 7%. From middle of 2013 new cars have to be equipped with SRR sensors which operate in the frequency band of 79 GHz (77 GHz to 81 GHz). The development of the 79 GHz SRR technology within the German government (BMBF) funded project KOKON is described",
"title": ""
},
{
"docid": "30155768fd0b1b0950510487840defba",
"text": "Most cloud services are built with multi-tenancy which enables data and configuration segregation upon shared infrastructures. In this setting, a tenant temporarily uses a piece of virtually dedicated software, platform, or infrastructure. To fully benefit from the cloud, tenants are seeking to build controlled and secure collaboration with each other. In this paper, we propose a Multi-Tenant Role-Based Access Control (MT-RBAC) model family which aims to provide fine-grained authorization in collaborative cloud environments by building trust relations among tenants. With an established trust relation in MT-RBAC, the trustee can precisely authorize cross-tenant accesses to the truster's resources consistent with constraints over the trust relation and other components designated by the truster. The users in the trustee may restrictively inherit permissions from the truster so that multi-tenant collaboration is securely enabled. Using SUN's XACML library, we prototype MT-RBAC models on a novel Authorization as a Service (AaaS) platform with the Joyent commercial cloud system. The performance and scalability metrics are evaluated with respect to an open source cloud storage system. The results show that our prototype incurs only 0.016 second authorization delay for end users on average and is scalable in cloud environments.",
"title": ""
}
] |
scidocsrr
|
c2b7219ee487c08205e9b6424260e0ec
|
T-Linkage: A Continuous Relaxation of J-Linkage for Multi-model Fitting
|
[
{
"docid": "4eaee8e140ccf216eba2eb60eb41d736",
"text": "In this paper, we study the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this problem can be cast as the problem of segmenting samples drawn from multiple linear subspaces. In practice, due to limitations of the tracker, occlusions, and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or corrupted entries. In this paper, we develop a robust subspace separation scheme that deals with these practical issues in a unified mathematical framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data. We compare the performance of our methods to state-of-the-art motion segmentation methods based on expectation-maximization and spectral clustering. For data without outliers or missing information, the results of our methods are on par with the state-of-the-art results and, in many cases, exceed them. In addition, our methods give surprisingly good performance in the presence of the three types of pathological trajectories mentioned above. All code and results are publicly available at http://perception.csl.uiuc.edu/coding/motion/.",
"title": ""
}
] |
[
{
"docid": "452285eb334f8b4ecc17592e53d7080e",
"text": "Fathers are taking on more childcare and household responsibilities than they used to and many non-profit and government organizations have pushed for changes in policies to support fathers. Despite this effort, little research has explored how fathers go online related to their roles as fathers. Drawing on an interview study with 37 fathers, we find that they use social media to document and archive fatherhood, learn how to be a father, and access social support. They also go online to support diverse family needs, such as single fathers' use of Reddit instead of Facebook, fathers raised by single mothers' search for role models online, and stay-at-home fathers' use of father blogs. However, fathers are constrained by privacy concerns and perceptions of judgment relating to sharing content online about their children. Drawing on theories of fatherhood, we present theoretical and design ideas for designing online spaces to better support fathers and fatherhood. We conclude with a call for a research agenda to support fathers online.",
"title": ""
},
{
"docid": "1e3e52f584863903625a07aabd1517d3",
"text": "Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset.",
"title": ""
},
{
"docid": "3fb39e30092858b84291a85a719f97f0",
"text": "A spherical wrist of the serial type is said to be isotropic if it can attain a posture whereby the singular values of its Jacobian matrix are all identical and nonzero. What isotropy brings about is robustness to manufacturing, assembly, and measurement errors, thereby guaranteeing a maximum orientation accuracy. In this paper we investigate the existence of redundant isotropic architectures, which should add to the dexterity of the wrist under design by virtue of its extra degree of freedom. The problem formulation leads to a system of eight quadratic equations with eight unknowns. The Bezout number of this system is thus 2 = 256, its BKK bound being 192. However, the actual number of solutions is shown to be 32. We list all solutions of the foregoing algebraic problem. All these solutions are real, but distinct solutions do not necessarily lead to distinct manipulators. Upon discarding those algebraic solutions that yield no new wrists, we end up with exactly eight distinct architectures, the eight corresponding manipulators being displayed at their isotropic posture.",
"title": ""
},
{
"docid": "4ed1c4f2fb1922acc9ee781eb1f9524e",
"text": "Across HCI and social computing platforms, mobile applications that support citizen science, empowering non-experts to explore, collect, and share data have emerged. While many of these efforts have been successful, it remains difficult to create citizen science applications without extensive programming expertise. To address this concern, we present Sensr, an authoring environment that enables people without programming skills to build mobile data collection and management tools for citizen science. We demonstrate how Sensr allows people without technical skills to create mobile applications. Findings from our case study demonstrate that our system successfully overcomes technical constraints and provides a simple way to create mobile data collection tools.",
"title": ""
},
{
"docid": "c91ce9eb908d5a0fccc980f306ec0931",
"text": "Text Mining has become an important research area. Text Mining is the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources. In this paper, a Survey of Text Mining techniques and applications have been s presented.",
"title": ""
},
{
"docid": "7de050ef4260ad858a620f9aa773b5a7",
"text": "We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today’s VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS ’A’, the Stanford STREAM engine, and a commercial stream processor ’B’.",
"title": ""
},
{
"docid": "e1efeca0d73be6b09f5cf80437809bdb",
"text": "Deep convolutional neural networks have been shown to be vulnerable to arbitrary geometric transformations. However, there is no systematic method to measure the invariance properties of deep networks to such transformations. We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks. In particular, our algorithm measures the robustness of deep networks to geometric transformations in a worst-case regime as they can be problematic for sensitive applications. Our extensive experimental results show that ManiFool can be used to measure the invariance of fairly complex networks on high dimensional datasets and these values can be used for analyzing the reasons for it. Furthermore, we build on ManiFool to propose a new adversarial training scheme and we show its effectiveness on improving the invariance properties of deep neural networks.1",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "1eab5897252dae2313210c666c3dce8c",
"text": "Bone marrow angiogenesis plays an important role in the pathogenesis and progression in multiple myeloma. Recent studies have shown that proteasome inhibitor bortezomib (Velcade, formerly PS-341) can overcome conventional drug resistance in vitro and in vivo; however, its antiangiogenic activity in the bone marrow milieu has not yet been defined. In the present study, we examined the effects of bortezomib on the angiogenic phenotype of multiple myeloma patient-derived endothelial cells (MMEC). At clinically achievable concentrations, bortezomib inhibited the proliferation of MMECs and human umbilical vein endothelial cells in a dose-dependent and time-dependent manner. In functional assays of angiogenesis, including chemotaxis, adhesion to fibronectin, capillary formation on Matrigel, and chick embryo chorioallantoic membrane assay, bortezomib induced a dose-dependent inhibition of angiogenesis. Importantly, binding of MM.1S cells to MMECs triggered multiple myeloma cell proliferation, which was also abrogated by bortezomib in a dose-dependent fashion. Bortezomib triggered a dose-dependent inhibition of vascular endothelial growth factor (VEGF) and interleukin-6 (IL-6) secretion by the MMECs, and reverse transcriptase-PCR confirmed drug-related down-regulation of VEGF, IL-6, insulin-like growth factor-I, Angiopoietin 1 (Ang1), and Ang2 transcription. These data, therefore, delineate the mechanisms of the antiangiogenic effects of bortezomib on multiple myeloma cells in the bone marrow milieu.",
"title": ""
},
{
"docid": "374d058c8986dd2ace4d99ecc60cbcc6",
"text": "Subscapularis (SSC) lesions are often underdiagnosed in the clinical routine. This study establishes and compares the diagnostic values of various clinical signs and diagnostic tests for lesions of the SSC tendon. Fifty consecutive patients who were scheduled for an arthroscopic subacromial or rotator cuff procedure were clinically evaluated using the lift-off test (LOT), the internal rotation lag sign (IRLS), the modified belly-press test (BPT) and the belly-off sign (BOS) preoperatively. A modified classification system according to Fox et al. (Type I–IV) was used to classify the SSC lesion during diagnostic arthroscopy. SSC tendon tears occured with a prevalence of 30% (15 of 50). Five type I, six type II, three type IIIa and one type IIIb tears according to the modified classification system were found. Fifteen percent of the SSC tears were not predicted preoperatively by using all of the tests. In six cases (12%), the LOT and the IRLS could not be performed due to a painful restricted range of motion. The modified BPT and the BOS showed the greatest sensitivity (88 and 87%) followed by the IRLS (71%) and the LOT (40%). The BOS had the greatest specificity (91%) followed by the LOT (79%), mod. BPT (68%) and IRLS (45%). The BOS had the highest overall accuracy (90%). With the BOS and the modified BPT in particular, upper SSC lesions (type I and II) could be diagnosed preoperatively. A detailed physical exam using the currently available SSC tests allows diagnosing SSC lesions in the majority of cases preoperatively. However, some tears could not be predicted by preoperative assessment using all the tests.",
"title": ""
},
{
"docid": "1223a45c3a2cebe4ce2e94d4468be946",
"text": "In this paper, we present an overview of energy storage in renewable energy systems. In fact, energy storage is a dominant factor. It can reduce power fluctuations, enhances the system flexibility, and enables the storage and dispatching of the electricity generated by variable renewable energy sources such as wind and solar. Different storage technologies are used in electric power systems. They can be chemical, electrochemical, mechanical, electromagnetic or thermal. Energy storage facility is comprised of a storage medium, a power conversion system and a balance of plant. In this work, an application to photovoltaic and wind electric power systems is made. The results obtained under Matlab/Simulink are presented.",
"title": ""
},
{
"docid": "122ed18a623510052664996c7ef4b4bb",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "47a484d75b1635139f899d2e1875d8f4",
"text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node lies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.",
"title": ""
},
{
"docid": "81a1504505fa4630af771ccf6ed8404d",
"text": "A method for the simultaneous co-registration and georeferencing of multiple 3D pointclouds and associated intensity information is proposed. It is a generalization of the 3D surface matching problem. The simultaneous co-registration provides for a strict solution to the problem, as opposed to sequential pairwise registration. The problem is formulated as the Least Squares matching of overlapping 3D surfaces. The parameters of 3D transformations of multiple surfaces are simultaneously estimated, using the Generalized GaussMarkoff model, minimizing the sum of squares of the Euclidean distances among the surfaces. An observation equation is written for each surface-to-surface correspondence. Each overlapping surface pair contributes a group of observation equations to the design matrix. The parameters are introduced into the system as stochastic variables, as a second type of (fictitious) observations. This extension allows to control the estimated parameters. Intensity information is introduced into the system in the form of quasisurfaces as the third type of observations. Reference points, defining an external (object) coordinate system, which are imaged in additional intensity images, or can be located in the pointcloud, serve as the fourth type of observations. They transform the whole block of “models” to a unique reference system. Furthermore, the given coordinate values of the control points are treated as observations. This gives the fifth type of observations. The total system is solved by applying the Least Squares technique, provided that sufficiently good initial values for the transformation parameters are given. This method can be applied to data sets generated from aerial as well as terrestrial laser scanning or other pointcloud generating methods. * Corresponding author. www.photogrammetry.ethz.ch",
"title": ""
},
{
"docid": "12818095167dbf85d5d717121f00f533",
"text": "Sarmento, H, Figueiredo, A, Lago-Peñas, C, Milanovic, Z, Barbosa, A, Tadeu, P, and Bradley, PS. Influence of tactical and situational variables on offensive sequences during elite football matches. J Strength Cond Res 32(8): 2331-2339, 2018-This study examined the influence of tactical and situational variables on offensive sequences during elite football matches. A sample of 68 games and 1,694 offensive sequences from the Spanish La Liga, Italian Serie A, German Bundesliga, English Premier League, and Champions League were analyzed using χ and logistic regression analyses. Results revealed that counterattacks (odds ratio [OR] = 1.44; 95% confidence interval [CI]: 1.13-1.83; p < 0.01) and fast attacks (OR = 1.43; 95% CI: 1.11-1.85; p < 0.01) increased the success of an offensive sequence by 40% compared with positional attacks. The chance of an offensive sequence ending effectively in games from the Spanish, Italian, and English Leagues were higher than that in the Champions League. Offensive sequences that started in the preoffensive or offensive zones were more successful than those started in the defensive zones. An increase of 1 second in the offensive sequence duration and an extra pass resulted in a decrease of 2% (OR = 0.98; 95% CI: 0.98-0.99; p < 0.001) and 7% (OR = 0.93; 95% CI: 0.91-0.96; p < 0.001), respectively, in the probability of its success. These findings could assist coaches in designing specific training situations that improve the effectiveness of the offensive process.",
"title": ""
},
{
"docid": "70ef6e69e811e3c66f1e73b3ad8c97b3",
"text": "The turnstile junction exhibits very low cross-polarization leakage and is suitable for low-noise millimeter-wave receivers. For use in a cryogenic receiver, it is best if the orthomode transducer (OMT) is implemented in waveguide, contains no additional assembly features, and may be directly machined. However, machined OMTs are prone to sharp signal drop-outs that are costly to overall performance since they show up directly as spikes in receiver noise. We explore the various factors contributing to this degradation and discuss how the current design mitigates each cause. Final performance is demonstrated at cryogenic temperatures.",
"title": ""
},
{
"docid": "3b6e3884a9d3b09d221d06f3dea20683",
"text": "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data – as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN’s kernels. We approximate our model’s intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-theart results for CIFAR-10.",
"title": ""
},
{
"docid": "6d3e19c44f7af5023ef991b722b078c5",
"text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. In this case, the cause of death was determined to be asphyxia from anoxia. Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.",
"title": ""
},
{
"docid": "c02d98d1cbda4447498c7d3e1993bae2",
"text": "We present MILABOT: a deep reinforcement learning chatbot developed by the Montreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing with humans on popular small talk topics through both speech and text. The system consists of an ensemble of natural language generation and retrieval models, including neural network and template-based models. By applying reinforcement learning to crowdsourced data and real-world user interactions, the system has been trained to select an appropriate response from the models in its ensemble. The system has been evaluated through A/B testing with realworld users, where it performed significantly better than other systems. The results highlight the potential of coupling ensemble systems with deep reinforcement learning as a fruitful path for developing real-world, open-domain conversational agents.",
"title": ""
},
{
"docid": "46ac5e994ca0bf0c3ea5dd110810b682",
"text": "The Geosciences and Geography are not just yet another application area for semantic technologies. The vast heterogeneity of the involved disciplines ranging from the natural sciences to the social sciences introduces new challenges in terms of interoperability. Moreover, the inherent spatial and temporal information components also require distinct semantic approaches. For these reasons, geospatial semantics, geo-ontologies, and semantic interoperability have been active research areas over the last 20 years. The geospatial semantics community has been among the early adopters of the Semantic Web, contributing methods, ontologies, use cases, and datasets. Today, geographic information is a crucial part of many central hubs on the Linked Data Web. In this editorial, we outline the research field of geospatial semantics, highlight major research directions and trends, and glance at future challenges. We hope that this text will be valuable for geoscientists interested in semantics research as well as knowledge engineers interested in spatiotemporal data. Introduction and Motivation While the Web has changed with the advent of the Social Web from mostly authoritative content towards increasing amounts of user generated information, it is essentially still about linked documents. These documents provide structure and context for the described data and easy their interpretation. In contrast, the evolving Data Web is about linking data, not documents. Such datasets are not bound to a specific document but can be easily combined and used outside of their original creation context. With a growth rate of millions of new facts encoded as RDF-triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple, heterogeneous data sources from different scientific domains. However, this uncoupling of data from its creation context makes the interpretation of data challenging. Thus, research on semantic interoperability and ontologies is crucial to ensure consistency and meaningful results. Space and time are fundamental ordering principles to structure such data and provide an implicit context for their interpretation. Hence, it is not surprising that many linked datasets either contain spatiotemporal identifiers themselves or link out to such datasets, making them central hubs of the Linked Data cloud. Prominent examples include Geonames.org as well as the Linked Geo Data project, which provides a RDF serialization of Points Of Interest from Open Street Map [103]. Besides such Voluntary Geographic Information (VGI), governments 1570-0844/12/$27.50 c © 2012 – IOS Press and the authors. All rights reserved",
"title": ""
}
] |
scidocsrr
|
e172eed8c141dcdc945a39004f82a07f
|
Oracle Quantum Computing
|
[
{
"docid": "a49b2152082aa23f9b90d298064b9733",
"text": "The number of steps required to compute a function depends, in general, on the type of computer that is used, on the choice of computer program, and on the input-output code. Nevertheless, the results obtained in this paper are so general as to be nearly independent of these considerations.\nA function is exhibited that requires an enormous number of steps to be computed, yet has a “nearly quickest” program: Any other program for this function, no matter how ingeniously designed it may be, takes practically as many steps as this nearly quickest program.\nA different function is exhibited with the property that no matter how fast a program may be for computing this function another program exists for computing the function very much faster.",
"title": ""
}
] |
[
{
"docid": "c35b5da1da795857baf4ee1ce7dbfac5",
"text": "The art of finding software vulnerabilities has been covered extensively in the literature and there is a huge body of work on this topic. In contrast, the intentional insertion of exploitable, security-critical bugs has received little (public) attention yet. Wanting more bugs seems to be counterproductive at first sight, but the comprehensive evaluation of bug-finding techniques suffers from a lack of ground truth and the scarcity of bugs.\n In this paper, we propose EvilCoder, a system to automatically find potentially vulnerable source code locations and modify the source code to be actually vulnerable. More specifically, we leverage automated program analysis techniques to find sensitive sinks which match typical bug patterns (e.g., a sensitive API function with a preceding sanity check), and try to find data-flow connections to user-controlled sources. We then transform the source code such that exploitation becomes possible, for example by removing or modifying input sanitization or other types of security checks. Our tool is designed to randomly pick vulnerable locations and possible modifications, such that it can generate numerous different vulnerabilities on the same software corpus. We evaluated our tool on several open-source projects such as for example libpng and vsftpd, where we found between 22 and 158 unique connected source-sink pairs per project. This translates to hundreds of potentially vulnerable data-flow paths and hundreds of bugs we can insert. We hope to support future bug-finding techniques by supplying freshly generated, bug-ridden test corpora so that such techniques can (finally) be evaluated and compared in a comprehensive and statistically meaningful way.",
"title": ""
},
{
"docid": "d9b0af2dccf5615f829d22b4a1d99ba1",
"text": "It is a premise of this research that prevention of near-term terrorist attacks requires an understanding of current terrorist organizations to include their composition, the actors involved, and how they operate to achieve their objectives. To aid in this understanding, operations research, sociological, and behavioral theory relevant to the study of social networks are applied, thereby providing theoretical foundations for new and useful methodologies to analyze non-cooperative organizations. Such organizations are defined as those trying to hide their structures or are unwilling to provide information regarding their operations; examples include criminal networks, secret societies, and, most importantly, clandestine terrorist organizations. Techniques leveraging information regarding multiple dimensions of interpersonal relationships, inferring from them the strengths of interpersonal ties, are explored. Hence, a layered network construct is offered that provides new analytic opportunities and insights generally unaccounted for in traditional social network analysis. These offer decision makers improved courses of action designed to impute influence upon an adversarial network, thereby achieving a desired influence, perception, or outcome to one or more actors within the target network. In addition, this knowledge can also be used to identify key individuals, relationships, and organizational practices. Subsequently, such analysis may lead to the identification of weaknesses that can be exploited in an endeavor to either eliminate the network as a whole, cause it to become operationally ineffective, or influence it to directly or indirectly support National Security Strategy. In today’s world, proficiency in this aspect of warfare is a necessary condition to ensure United States National Security, as well as to promote and maintain global stability. Quantitative methods serving as the basis for, and discriminator between, courses of action seeking a path towards peace are a principal output of this research.",
"title": ""
},
{
"docid": "33bb513236db3f2f1f36a836191204bb",
"text": "In this paper, we present Vertexica, a graph analytics tools on top of a relational database, which is user friendly and yet highly ecient. Instead of constraining programmers to SQL, Vertexica offers a popular vertex-centric query interface, which is more natural for analysts to express many graph queries. e programmers simply provide their vertex-compute functions and Vertexica takes care of eciently executing them in the standard SQL engine. e advantage of using Vertexica is its ability to leverage the relational features and enable much more sophisticated graph analysis. ese include expressing graph algorithms which are dicult in vertexcentric but straightforward in SQL and the ability to compose endto-end data processing pipelines, including preand postprocessing of graphs as well as combining multiple algorithms for deeper insights. Vertexica has a graphical user interface and we outline several demonstration scenarios including, interactive graph analysis, complex graph analysis, and continuous and time series analysis.",
"title": ""
},
{
"docid": "54df0e1a435d673053f9264a4c58e602",
"text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.",
"title": ""
},
{
"docid": "4a1de61e9e74aa43a4e0bf195250ef72",
"text": "We present in this paper a system for converting PDF legacy documents into structured XML format. This conversion system first extracts the different streams contained in PDF files (text, bitmap and vectorial images) and then applies different components in order to express in XML the logically structured documents. Some of these components are traditional in Document Analysis, other more specific to PDF. We also present a graphical user interface in order to check, correct and validate the analysis of the components. We eventually report on two real user cases where this system was applied on.",
"title": ""
},
{
"docid": "49e7ac480b41045f69af59635476b28a",
"text": "Crowd sensing harnesses the power of the crowd by mobilizing a large number of users carrying various mobile and networked devices to collect data with the intrinsic multi-modal and large-volume features. With traditional methods, it is highly challenging to analyze the vast data volume generated by crowd sensing. In the era of big data, although several individual-oriented approaches are proposed to analyze human behavior based on big data, the common features of individual activity have not been fully investigated. In this article, we design a novel community- centric framework for community activity prediction based on big data analysis. Specifically, we propose an approach to extract community activity patterns by analyzing the big data collected from both the physical world and virtual social space. The proposed approach consists of community detection based on singular value decomposition and clustering, and community activity modeling based on tensors. The proposed approach is evaluated with a case study where a real dataset collected over a 15-month period is analyzed.",
"title": ""
},
{
"docid": "1c74034a07f6312001310b86e5b5162c",
"text": "We describe a general offset-canceling architecture for analog multiplication using chopper stabilization. Chopping is used to modulate the offset away from the output signal where it can be easily filtered out, providing continuous offset reduction which is insensitive to drift. Both square wave chopping and chopping with orthogonal spreading codes are tested and shown to reduce the offset down to the microvolt level. In addition, we apply the nested chopping technique to an analog multiplier which employs two levels of chopping to reduce the offset even further. We discuss the limits on the performance of the various chopping methods in detail, and present a detailed analysis of the residual offset due to charge injection spikes. An illustrative CMOS prototype in a 0.18 mum process is presented which achieves a worst-case offset of 1.5 muV. This is the lowest measured offset reported in the DC analog multiplier literature by a margin of two orders of magnitude. The prototype multiplier is also tested with AC inputs as a squarer, variable gain amplifier, and direct-conversion mixer, demonstrating that chopper stabilization is effective for both DC and AC multiplication. The AC measurements show that chopping removes not only offset, but also 1/f noise and second-order harmonic distortion.",
"title": ""
},
{
"docid": "7e557091d8cfe6209b1eda3b664ab551",
"text": "With the increasing penetration of mobile phones, problematic use of mobile phone (PUMP) deserves attention. In this study, using a path model we examined the relationship between depression and PUMP, with motivations as mediators. Findings suggest that depressed people may rely on mobile phone to alleviate their negative feelings and spend more time on communication activities via mobile phone, which in turn can deteriorate into PUMP. However, face-to-face communication with others played a moderating role, weakening the link between use of mobile phone for communication activities and dete-",
"title": ""
},
{
"docid": "cebe11867c14e02454dfe55c8e33f932",
"text": "Johan Farkas, Jannick Schou & Christina Neumayer Abstract This research analyses cloaked Facebook pages that are created to spread political propaganda by cloaking a user profile and imitating the identity of a political opponent in order to spark hateful and aggressive reactions. This inquiry is pursued through a multi-sited ethnographic case study of Danish Facebook pages disguised as radical Islamist pages, which provoked racist and anti-Muslim reactions as well as negative sentiments toward refugees, and immigrants in Denmark in general. Drawing on Jessie Daniels’ critical insights into cloaked websites, this research furthermore analyses the epistemological, methodological, and conceptual challenges of online propaganda. It enhances our understanding of disinformation and propaganda in an increasingly interactive social media environment and contributes to a critical inquiry into social media and subversive politics.",
"title": ""
},
{
"docid": "605125a6801bd9aa190f177ee4f0cb1f",
"text": "One of the challenges in bio-computing is to enable the efficient use and inter-operation of a wide variety of rapidly-evolving computational methods to simulate, analyze, and understand the complex properties and interactions of molecular systems. In our laboratory we investigates several areas, including protein-ligand docking, protein-protein docking, and complex molecular assemblies. Over the years we have developed a number of computational tools such as molecular surfaces, phenomenological potentials, various docking and visualization programs which we use in conjunction with programs developed by others. The number of programs available to compute molecular properties and/or simulate molecular interactions (e.g., molecular dynamics, conformational analysis, quantum mechanics, distance geometry, docking methods, ab-initio methods) is large and growing rapidly. Moreover, these programs come in many flavors and variations, using different force fields, search techniques, algorithmic details (e.g., continuous space vs. discrete, Cartesian vs. torsional). Each variation presents its own characteristic set of advantages and limitations. These programs also tend to evolve rapidly and are usually not written as components, making it hard to get them to work together.",
"title": ""
},
{
"docid": "4b7a885d463022a1792d99ff0c76be72",
"text": "Emerging applications in sensor systems and network-wide IP traffic analysis present many technical challenges. They need distributed monitoring and continuous tracking of events. They have severe resource constraints not only at each site in terms of per-update processing time and archival space for highspeed streams of observations, but also crucially, communication constraints for collaborating on the monitoring task. These elements have been addressed in a series of recent works. A fundamental issue that arises is that one cannot make the \"uniqueness\" assumption on observed events which is present in previous works, since widescale monitoring invariably encounters the same events at different points. For example, within the network of an Internet Service Provider packets of the same flow will be observed in different routers; similarly, the same individual will be observed by multiple mobile sensors in monitoring wild animals. Aggregates of interest on such distributed environments must be resilient to duplicate observations. We study such duplicate-resilient aggregates that measure the extent of the duplication―how many unique observations are there, how many observations are unique―as well as standard holistic aggregates such as quantiles and heavy hitters over the unique items. We present accuracy guaranteed, highly communication-efficient algorithms for these aggregates that work within the time and space constraints of high speed streams. We also present results of a detailed experimental study on both real-life and synthetic data.",
"title": ""
},
{
"docid": "bf19f897047ba130afd7742a9847e08c",
"text": "Neural Machine Translation (NMT) has been shown to be more effective in translation tasks compared to the Phrase-Based Statistical Machine Translation (PBMT). However, NMT systems are limited in translating low-resource languages (LRL), due to the fact that neural methods require a large amount of parallel data to learn effective mappings between languages. In this work we show how so-called multilingual NMT can help to tackle the challenges associated with LRL translation. Multilingual NMT forces words and subwords representation in a shared semantic space across multiple languages. This allows the model to utilize a positive parameter transfer between different languages, without changing the standard attentionbased encoder-decoder architecture and training modality. We run preliminary experiments with three languages (English, Italian, Romanian) covering six translation directions and show that for all available directions the multilingual approach, i.e. just one system covering all directions is comparable or even outperforms the single bilingual systems. Finally, our approach achieve competitive results also for language pairs not seen at training time using a pivoting (x-step) translation. Italiano. La traduzione automatica con reti neurali (neural machine translation, NMT) ha dimostrato di essere più efficace in molti compiti di traduzione rispetto a quella basata su frasi (phrase-based machine translation, PBMT). Tuttavia, i sistemi NMT sono limitati nel tradurre lingue con basse risorse (LRL). Questo è dovuto al fatto che i metodi di deep learning richiedono grandi quantit di dati per imparare una mappa efficace tra le due lingue. In questo lavoro mostriamo come un modello NMT multilingua può aiutare ad affrontare i problemi legati alla traduzione di LRL. La NMT multilingua costringe la rappresentrazione delle parole e dei segmenti di parole in uno spazio semantico condiviso tra multiple lingue. Questo consente al modello di usare un trasferimento di parametri positivo tra le lingue coinvolte, senza cambiare l’architettura NMT encoder-decoder basata sull’attention e il modo di addestramento. Abbiamo eseguito esperimenti preliminari con tre lingue (inglese, italiano e rumeno), coprendo sei direzioni di traduzione e mostriamo che per tutte le direzioni disponibili l’approccio multilingua, cioè un solo sistema che copre tutte le direzioni è confrontabile o persino migliore dei singolo sistemi bilingue. Inoltre, il nostro approccio ottiene risultati competitivi anche per coppie di lingue non viste durante il trainig, facendo uso di traduzioni con pivot.",
"title": ""
},
{
"docid": "d52ba071c7790478235e364fc1cfab83",
"text": "We study parameter inference in large-scale latent variable models. We first propose a unified treatment of online inference for latent variable models from a non-canonical exponential family, and draw explicit links between several previously proposed frequentist or Bayesian methods. We then propose a novel inference method for the frequentist estimation of parameters, that adapts MCMC methods to online inference of latent variable models with the proper use of local Gibbs sampling. Then, for latent Dirichlet allocation,we provide an extensive set of experiments and comparisons with existing work, where our new approach outperforms all previously proposed methods. This work is currently under review for JMLR [1] (submitted on July, 27 2016).",
"title": ""
},
{
"docid": "ac0b4babbe59570c801ae3efbb6dcbe3",
"text": "In recent years, RNA has attracted widespread attention as a unique biomaterial with distinct biophysical properties for designing sophisticated architectures in the nanometer scale. RNA is much more versatile in structure and function with higher thermodynamic stability compared to its nucleic acid counterpart DNA. Larger RNA molecules can be viewed as a modular structure built from a combination of many 'Lego' building blocks connected via different linker sequences. By exploiting the diversity of RNA motifs and flexibility of structure, varieties of RNA architectures can be fabricated with precise control of shape, size, and stoichiometry. Many structural motifs have been discovered and characterized over the years and the crystal structures of many of these motifs are available for nanoparticle construction. For example, using the flexibility and versatility of RNA structure, RNA triangles, squares, pentagons, and hexagons can be constructed from phi29 pRNA three-way-junction (3WJ) building block. This review will focus on 2D RNA triangles, squares, and hexamers; 3D and 4D structures built from basic RNA building blocks; and their prospective applications in vivo as imaging or therapeutic agents via specific delivery and targeting. Methods for intracellular cloning and expression of RNA molecules and the in vivo assembly of RNA nanoparticles will also be reviewed. WIREs RNA 2018, 9:e1452. doi: 10.1002/wrna.1452 This article is categorized under: RNA Methods > RNA Nanotechnology RNA Structure and Dynamics > RNA Structure, Dynamics and Chemistry RNA in Disease and Development > RNA in Disease Regulatory RNAs/RNAi/Riboswitches > Regulatory RNAs.",
"title": ""
},
{
"docid": "d083e8ebddf43bcd8f1efd05aa708658",
"text": "Even a casual reading of the extensive literature on student development in higher education can create confusion and perplexity. One finds not only that the problems being studied are highly diverse but also that investigators who claim to be studying the same problem frequently do not look at the same variables or employ the same methodologies. And even when they are investigating the same variables, different investigators may use completely different terms to describe and discuss these variables. My own interest in articulating a theory of student development is partly practical—I would like to bring some order into the chaos of the literature—and partly self-protective. I and increasingly bewildered by the muddle of f indings that have emerged from my own research in student development, research that I have been engaged in for more than 20 years. The theory of student involvement that I describe in this article appeals to me for several reasons. First, it is simple: I have not needed to draw a maze consisting of dozens of boxes interconnected by two-headed arrows to explain the basic elements of the theory to others. Second, the theory can explain most of the empirical knowledge about environmental influences on student development that researchers have gained over the years. Third, it is capable of embracing principles from such widely divergent sources as psychoanalysis and classical learning theory. Finally, this theory of student involvement can be used both by researchers to guide their investigation of student development—and by college administrators and",
"title": ""
},
{
"docid": "91c4a5de9dd41c5dec87a1475e8218fd",
"text": "We present a general theory and corresponding declarative model for the embodied grounding and natural language based analytical summarisation of dynamic visuo-spatial imagery. The declarative model —ecompassing spatiolinguistic abstractions, image schemas, and a spatio-temporal feature based language generator— is modularly implemented within Constraint Logic Programming (CLP). The implemented model is such that primitives of the theory, e.g., pertaining to space and motion, image schemata, are available as first-class objects with deep semantics suited for inference and query. We demonstrate the model with select examples broadly motivated by areas such as film, design, geography, smart environments where analytical natural language based externalisations of the moving image are central from the viewpoint of human interaction, evidence-based qualitative analysis, and sensemaking.",
"title": ""
},
{
"docid": "9c118c312d8118e9a71fa0d17fa42b51",
"text": "The Standards of Care (SOC) for the Health of Transsexual, Transgender, and Gender Nonconforming People is a publication of the World Professional Association for Transgender Health (WPATH). The overall goal of the SOC is to provide clinical guidance for health professionals to assist transsexual, transgender, and gender nonconforming people with safe and effective pathways to achieving lasting personal comfort with their gendered selves, in order to maximize their overall health, psychological well-being, and self-fulfillment. This assistance may include primary care, gynecologic and urologic care, reproductive options, voice and communication therapy, mental health services (e.g., assessment, counseling, psychotherapy), and hormonal and surgical treatments. The SOC are based on the best available science and expert professional consensus. Because most of the research and experience in this field comes from a North American and Western European perspective, adaptations of the SOC to other parts of the world are necessary. The SOC articulate standards of care while acknowledging the role of making informed choices and the value of harm reduction approaches. In addition, this version of the SOC recognizes that treatment for gender dysphoria i.e., discomfort or distress that is caused by a discrepancy between persons gender identity and that persons sex assigned at birth (and the associated gender role and/or primary and secondary sex characteristics) has become more individualized. Some individuals who present for care will have made significant self-directed progress towards gender role changes or other resolutions regarding their gender identity or gender dysphoria. Other individuals will require more intensive services. Health professionals can use the SOC to help patients consider the full range of health services open to them, in accordance with their clinical needs and goals for gender expression.",
"title": ""
},
{
"docid": "d6681899902b990f82b775927cde9277",
"text": "Facial expression provides an important behavioral measure for studies of emotion, cognitive processes, and social interaction. Facial expression recognition has recently become a promising research area. Its applications include human-computer interfaces, human emotion analysis, and medical care and cure. In this paper, we investigate various feature representation and expression classification schemes to recognize seven different facial expressions, such as happy, neutral, angry, disgust, sad, fear and surprise, in the JAFFE database. Experimental results show that the method of combining 2D-LDA (Linear Discriminant Analysis) and SVM (Support Vector Machine) outperforms others. The recognition rate of this method is 95.71% by using leave-one-out strategy and 94.13% by using cross-validation strategy. It takes only 0.0357 second to process one image of size 256 × 256.",
"title": ""
},
{
"docid": "b75c7e5c8badea76dc21c05901e32423",
"text": "The need for autonomous navigation has increased in recent years due to the adoption of unmanned aerial vehicles (UAVs) and Micro UAVs (MAVs) for task, such as search and rescue, terrain mapping, and other missions, albeit by different means. MAVs have been less successful at fulfilling these missions as they are unable to carry complex sensors and camera systems for computer vision which larger UAVs routinely use. Monocular vision has been used previously to provide vision capabilities for MAVs. Although, monocular vision has had less success at obstacle detection and avoidance compared to stereo vision and is more computationally expensive. The more expensive computations have there for posed a problem in the past for on board closed MAV systems for autonomous navigation using monocular vision. However, with embedded GPUs recently gaining traction for small yet powerful parallel computations in small form factors show promise for fully closed MAV systems. This paper discusses the future of autonomous navigation with an embedded GPU, NVIDIA’s Jetson TX1 board, and AR.Drone 2.0 MAV drone using a novel obstacle detection algorithm implementing goodFeaturesToTrack, Lucas-Kanade Optical Flow, image segmentation, and size expansion.",
"title": ""
},
{
"docid": "5e194b5c1b14b423e955880de810eaba",
"text": "A human body detection algorithm based on the combination of moving information with shape information is proposed in the paper. Firstly, Eigen-object computed from three frames in the initial video sequences is used to detect the moving object. Secondly, the shape information of human body is used to classify human and other object. Furthermore, the occlusion between two objects during a short time is processed by using continues multiple frames. The advantages of the algorithm are accurately moving object detection, and the detection result doesn't effect by body pose. Moreover, as the shadow of moving object has been eliminated.",
"title": ""
}
] |
scidocsrr
|
191acb49442f6505c839606b130fa5ff
|
A simulation as a service cloud middleware
|
[
{
"docid": "e740e5ff2989ce414836c422c45570a9",
"text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.",
"title": ""
},
{
"docid": "9380bb09ffc970499931f063008c935f",
"text": "Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduction of operation costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advancement in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
[
{
"docid": "234fcc911f6d94b6bbb0af237ad5f34f",
"text": "Contamination of samples with DNA is still a major problem in microbiology laboratories, despite the wide acceptance of PCR and other amplification techniques for the detection of frequently low amounts of target DNA. This review focuses on the implications of contamination in the diagnosis and research of infectious diseases, possible sources of contaminants, strategies for prevention and destruction, and quality control. Contamination of samples in diagnostic PCR can have far-reaching consequences for patients, as illustrated by several examples in this review. Furthermore, it appears that the (sometimes very unexpected) sources of contaminants are diverse (including water, reagents, disposables, sample carry over, and amplicon), and contaminants can also be introduced by unrelated activities in neighboring laboratories. Therefore, lack of communication between researchers using the same laboratory space can be considered a risk factor. Only a very limited number of multicenter quality control studies have been published so far, but these showed false-positive rates of 9–57%. The overall conclusion is that although nucleic acid amplification assays are basically useful both in research and in the clinic, their accuracy depends on awareness of risk factors and the proper use of procedures for the prevention of nucleic acid contamination. The discussion of prevention and destruction strategies included in this review may serve as a guide to help improve laboratory practices and reduce the number of false-positive amplification results.",
"title": ""
},
{
"docid": "cff3b4f6db26e66893a9db95fb068ef1",
"text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.",
"title": ""
},
{
"docid": "417100b3384ec637b47846134bc6d1fd",
"text": "The electronic way of learning and communicating with students offers a lot of advantages that can be achieved through different solutions. Among them, the most popular approach is the use of a learning management system. Teachers and students do not have the possibility to use all of the available learning system tools and modules. Even for modules that are used it is necessary to find the most effective method of approach for any given situation. Therefore, in this paper we make a usability evaluation of standard modules in Moodle, one of the leading open source learning management systems. With this research, we obtain significant results and informationpsilas for administrators, teachers and students on how to improve effective usage of this system.",
"title": ""
},
{
"docid": "48317f6959b4a681e0ff001c7ce3e7ee",
"text": "We introduce the challenge of using machine learning effectively in space applications and motivate the domain for future researchers. Machine learning can be used to enable greater autonomy to improve the duration, reliability, cost-effectiveness, and science return of space missions. In addition to the challenges provided by the nature of space itself, the requirements of a space mission severely limit the use of many current machine learning approaches, and we encourage researchers to explore new ways to address these challenges.",
"title": ""
},
{
"docid": "6746032bbd302a8c873ac437fc79b3fe",
"text": "This article examines the development of profitor revenue-sharing contracts in the motion picture industry. Contrary to much popular belief, such contracts have been in use since the start of the studio era. However, early contracts differed from those seen today. The evolution of the current contract is traced, and evidence regarding the increased use of sharing contracts after 1948 is examined. I examine competing theories of the economic function served by these contracts. I suggest that it is unlikely that these contracts are the result of a standard principal-agent problem.",
"title": ""
},
{
"docid": "defb837e866948e5e092ab64476d33b5",
"text": "Recent multicoil polarised pads called Double D pads (DDP) and Bipolar Pads (BPP) show excellent promise when used in lumped charging due to having single sided fields and high native Q factors. However, improvements to field leakage are desired to enable higher power transfer while keeping the leakage flux within ICNIRP levels. This paper proposes a method to reduce the leakage flux which a lumped inductive power transfer (IPT) system exhibits by modifying the ferrite structure of its pads. The DDP and BPP pads ferrite structures are both modified by extending them past the ends of the coils in each pad with the intention of attracting only magnetic flux generated by the primary pad not coupled onto the secondary pad. Simulated improved ferrite structures are validated through practical measurements.",
"title": ""
},
{
"docid": "4b057d86825e346291d675e0c1285fad",
"text": "We describe theclipmap, a dynamic texture representation that efficiently caches textures of arbitrarily large size in a finite amount of physical memory for rendering at real-time rates. Further, we describe a software system for managing clipmaps that supports integration into demanding real-time applications. We show the scale and robustness of this integrated hardware/software architecture by reviewing an application virtualizing a 170 gigabyte texture at 60 Hertz. Finally, we suggest ways that other rendering systems may exploit the concepts underlying clipmaps to solve related problems. CR",
"title": ""
},
{
"docid": "6be97ac80738519792c02b033563efa7",
"text": "Title of Document: SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT Stephan Charles Greene Doctor of Philosophy, 2007 Directed By: Professor Philip Resnik, Department of Linguistics and Institute for Advanced Computer Studies Current interest in automatic sentiment analysis i motivated by a variety of information requirements. The vast majority of work in sentiment analysis has been specifically targeted at detecting subjective state ments and mining opinions. This dissertation focuses on a different but related pro blem that to date has received relatively little attention in NLP research: detect ing implicit sentiment , or spin, in text. This text classification task is distinguished from ther sentiment analysis work in that there is no assumption that the documents to b e classified with respect to sentiment are necessarily overt expressions of opin ion. They rather are documents that might reveal a perspective . This dissertation describes a novel approach to t e identification of implicit sentiment, motivated by ideas drawn from the literature on lexical semantics and argument structure, supported and refined through psycholinguistic experimentation. A relationship pr edictive of sentiment is established for components of meaning that are thou g t to be drivers of verbal argument selection and linking and to be arbiters o f what is foregrounded or backgrounded in discourse. In computational experim nts employing targeted lexical selection for verbs and nouns, a set of features re flective of these components of meaning is extracted for the terms. As observable p roxies for the underlying semantic components, these features are exploited using mach ine learning methods for text classification with respect to perspective. After i nitial experimentation with manually selected lexical resources, the method is generaliz d to require no manual selection or hand tuning of any kind. The robustness of this lin gu stically motivated method is demonstrated by successfully applying it to three d istinct text domains under a number of different experimental conditions, obtain ing the best classification accuracies yet reported for several sentiment class ification tasks. A novel graph-based classifier combination method is introduced which f urther improves classification accuracy by integrating statistical classifiers wit h models of inter-document relationships. SPIN: LEXICAL SEMANTICS, TRANSITIVITY, AND THE IDENTIFICATION OF IMPLICIT SENTIMENT",
"title": ""
},
{
"docid": "0df2ca944dcdf79369ef5a7424bf3ffe",
"text": "This article first presents two theories representing distinct approaches to the field of stress research: Selye's theory of `systemic stress' based in physiology and psychobiology, and the `psychological stress' model developed by Lazarus. In the second part, the concept of coping is described. Coping theories may be classified according to two independent parameters: traitoriented versus state-oriented, and microanalytic versus macroanalytic approaches. The multitude of theoretical conceptions is based on the macroanalytic, trait-oriented approach. Examples of this approach that are presented in this article are `repression–sensitization,' `monitoringblunting,' and the `model of coping modes.' The article closes with a brief outline of future perspectives in stress and coping research.",
"title": ""
},
{
"docid": "6ce2991a68c7d4d6467ff2007badbaf0",
"text": "This paper investigates acoustic models for automatic speech recognition (ASR) using deep neural networks (DNNs) whose input is taken directly from windowed speech waveforms (WSW). After demonstrating the ability of these networks to automatically acquire internal representations that are similar to mel-scale filter-banks, an investigation into efficient DNN architectures for exploiting WSW features is performed. First, a modified bottleneck DNN architecture is investigated to capture dynamic spectrum information that is not well represented in the time domain signal. Second,the redundancies inherent in WSW based DNNs are considered. The performance of acoustic models defined over WSW features is compared to that obtained from acoustic models defined over mel frequency spectrum coefficient (MFSC) features on the Wall Street Journal (WSJ) speech corpus. It is shown that using WSW features results in a 3.0 percent increase in WER relative to that resulting from MFSC features on the WSJ corpus. However, when combined with MFSC features, a reduction in WER of 4.1 percent is obtained with respect to the best evaluated MFSC based DNN acoustic model.",
"title": ""
},
{
"docid": "e91310da7635df27b5c4056388cc6e52",
"text": "This paper presents a new metric for automated registration of multi-modal sensor data. The metric is based on the alignment of the orientation of gradients formed from the two candidate sensors. Data registration is performed by estimating the sensors’ extrinsic parameters that minimises the misalignment of the gradients. The metric can operate in a large range of applications working on both 2D and 3D sensor outputs and is suitable for both (i) single scan data registration and (ii) multi-sensor platform calibration using multiple scans. Unlike traditional calibration methods, it does not require markers or other registration aids to be placed in the scene. The effectiveness of the new method is demonstrated with experimental results on a variety of camera-lidar and camera-camera calibration problems. The novel metric is validated through comparisons with state of the art methods. Our approach is shown to give high quality registrations under all tested conditions. C © 2014 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "35b286999957396e1f5cab6e2370ed88",
"text": "Text summarization condenses a text to a shorter version while retaining the important informations. Abstractive summarization is a recent development that generates new phrases, rather than simply copying or rephrasing sentences within the original text. Recently neural sequence-to-sequence models have achieved good results in the field of abstractive summarization, which opens new possibilities and applications for industrial purposes. However, most practitioners observe that these models still use large parts of the original text in the output summaries, making them often similar to extractive frameworks. To address this drawback, we first introduce a new metric to measure how much of a summary is extracted from the input text. Secondly, we present a novel method, that relies on a diversity factor in computing the neural network loss, to improve the diversity of the summaries generated by any neural abstractive model implementing beam search. Finally, we show that this method not only makes the system less extractive, but also improves the overall rouge score of state-of-the-art methods by at least 2 points.",
"title": ""
},
{
"docid": "1d60437cbd2cec5058957af291ca7cde",
"text": "e behavior of users in certain services could be a clue that can be used to infer their preferences and may be used to make recommendations for other services they have never used. However, the cross-domain relationships between items and user consumption paerns are not simple, especially when there are few or no common users and items across domains. To address this problem, we propose a content-based cross-domain recommendation method for cold-start users that does not require userand itemoverlap. We formulate recommendation as extreme multi-class classication where labels (items) corresponding to the users are predicted. With this formulation, the problem is reduced to a domain adaptation seing, in which a classier trained in the source domain is adapted to the target domain. For this, we construct a neural network that combines an architecture for domain adaptation, Domain Separation Network, with a denoising autoencoder for item representation. We assess the performance of our approach in experiments on a pair of data sets collected from movie and news services of Yahoo! JAPAN and show that our approach outperforms several baseline methods including a cross-domain collaborative ltering method.",
"title": ""
},
{
"docid": "a89c53f4fbe47e7a5e49193f0786cd6d",
"text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.",
"title": ""
},
{
"docid": "5e75a46c36e663791db0f8b45f685cb6",
"text": "This study provides one of very few experimental investigations into the impact of a musical soundtrack on the video gaming experience. Participants were randomly assigned to one of three experimental conditions: game-with-music, game-without-music, or music-only. After playing each of three segments of The Lord of the Rings: The Two Towers (Electronic Arts, 2002)--or, in the music-only condition, listening to the musical score that accompanies the scene--subjects responded on 21 verbal scales. Results revealed that some, but not all, of the verbal scales exhibited a statistically significant difference due to the presence of a musical score. In addition, both gender and age level were shown to be significant factors for some, but not all, of the verbal scales. Details of the specific ways in which music affects the gaming experience are provided in the body of the paper.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "3476246809afe4e6b7cef9bbbed1926e",
"text": "The aim of this study was to investigate the efficacy of a proposed new implant mediated drug delivery system (IMDDS) in rabbits. The drug delivery system is applied through a modified titanium implant that is configured to be implanted into bone. The implant is hollow and has multiple microholes that can continuously deliver therapeutic agents into the systematic body. To examine the efficacy and feasibility of the IMDDS, we investigated the pharmacokinetic behavior of dexamethasone in plasma after a single dose was delivered via the modified implant placed in the rabbit tibia. After measuring the plasma concentration, the areas under the curve showed that the IMDDS provided a sustained release for a relatively long period. The result suggests that the IMDDS can deliver a sustained release of certain drug components with a high bioavailability. Accordingly, the IMDDS may provide the basis for a novel approach to treating patients with chronic diseases.",
"title": ""
},
{
"docid": "bd21815804115f2c413265660a78c203",
"text": "Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied due to a lack of empirical data and suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network, formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet making the network vulnerable to disruptions in them. We also show how network science can be used to identify firms that are operationally critical and that are key to disseminating information.",
"title": ""
},
{
"docid": "dc207fb8426f468dde2cb1d804b33539",
"text": "This paper presents a webcam-based spherical coordinate conversion system using OpenCL massive parallel computing for panorama video image stitching. With multi-core architecture and its high-bandwidth data transmission rate of memory accesses, modern programmable GPU makes it possible to process multiple video images in parallel for real-time interaction. To get a panorama view of 360 degrees, we use OpenCL to stitch multiple webcam video images into a panorama image and texture mapped it to a spherical object to compose a virtual reality immersive environment. The experimental results show that when we use NVIDIA 9600GT to process eight 640×480 images, OpenCL can achieve ninety times speedups.",
"title": ""
},
{
"docid": "161c79eeb01624c497446cb2c51f3893",
"text": "In this article, results of a German nationwide survey (KFN schools survey 2007/2008) are presented. The controlled sample of 44,610 male and female ninth-graders was carried out in 2007 and 2008 by the Criminological Research Institute of Lower Saxony (KFN). According to a newly developed screening instrument (KFN-CSAS-II), which was presented to every third juvenile participant (N = 15,168), 3% of the male and 0.3% of the female students are diagnosed as dependent on video games. The data indicate a clear dividing line between extensive gaming and video game dependency (VGD) as a clinically relevant phenomenon. VGD is accompanied by increased levels of psychological and social stress in the form of lower school achievement, increased truancy, reduced sleep time, limited leisure activities, and increased thoughts of committing suicide. In addition, it becomes evident that personal risk factors are crucial for VGD. The findings indicate the necessity of additional research as well as the respective measures in the field of health care policies.",
"title": ""
}
] |
scidocsrr
|
80d5a1ee4c177058910ee7a708fe8dc3
|
Camera Model Identification Based on the Heteroscedastic Noise Model
|
[
{
"docid": "8055b2c65d5774000fe4fa81ff83efb7",
"text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charged-coupled device ( C C D ) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index T e m s C C D cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.",
"title": ""
}
] |
[
{
"docid": "783c347d3d4f5a191508f005b362164b",
"text": "Workspace awareness is knowledge about others’ interaction with a shared workspace. Groupware systems provide only limited information about other participants, often compromising workspace awareness. This paper describes a usability study of several widgets designed to help maintain awareness in a groupware workspace. These widgets include a miniature view, a radar view, a multiuser scrollbar, a glance function, and a “what you see is what I do” view. The study examined the widgets’ information content, how easily people could interpret them, and whether they were useful or distracting. Observations, questionnaires, and interviews indicate that the miniature and radar displays are useful and valuable for tasks involving spatial manipulation of artifacts.",
"title": ""
},
{
"docid": "331391539cd5a226e9389f96f815fa0d",
"text": "Understanding protein function from amino acid sequence is a fundamental problem in biology. In this project, we explore how well we can represent biological function through examination of raw sequence alone. Using a large corpus of protein sequences and their annotated protein families, we learn dense vector representations for amino acid sequences using the co-occurrence statistics of short fragments. Then, using this representation, we experiment with several neural network architectures to train classifiers for protein family identification. We show good performance for a multi-class prediction problem with 589 protein family classes.",
"title": ""
},
{
"docid": "b94e096ea1bc990bd7c72aab988dd5ff",
"text": "The paper describes the design and implementation of an independent, third party contract monitoring service called Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) for the CCC has been developed based on business rules, that provides constructs to specify what rights, obligation and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.",
"title": ""
},
{
"docid": "9b917dde9a9f9dcf8ed74fd0bb3a07cf",
"text": "We describe an ELECTRONIC SPEAKING GLOVE, designed to facilitate an easy communication through synthesized speech for the benefit of speechless patients. Generally, a speechless person communicates through sign language which is not understood by the majority of people. This final year project is designed to solve this problem. Gestures of fingers of a user of this glove will be converted into synthesized speech to convey an audible message to others, for example in a critical communication with doctors. The glove is internally equipped with multiple flex sensors that are made up of “bend-sensitive resistance elements”. For each specific gesture, internal flex sensors produce a proportional change in resistance of various elements. The processing of this information sends a unique set of signals to the AVR (Advance Virtual RISC) microcontroller which is preprogrammed to speak desired sentences.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "b9546d8f52b19ba99bb9c8f4dc62f2bd",
"text": "One of the main unresolved problems that arise during the data mining process is treating data that contains temporal information. In this case, a complete understanding of the entire phenomenon requires that the data should be viewed as a sequence of events. Temporal sequences appear in a vast range of domains, from engineering, to medicine and finance, and the ability to model and extract information from them is crucial for the advance of the information society. This paper provides a survey on the most significant techniques developed in the past ten years to deal with temporal sequences.",
"title": ""
},
{
"docid": "2085662af2d74d31756674bac9e6a2a7",
"text": "Deep Learning (DL) algorithms have become the de facto choice for data analysis. Several DL implementations – primarily limited to a single compute node – such as Caffe, TensorFlow, Theano and Torch have become readily available. Distributed DL implementations capable of execution on large scale systems are becoming important to address the computational needs of large data produced by scientific simulations and experiments. Yet, the adoption of distributed DL implementations faces significant impediments: 1) most implementations require DL analysts to modify their code significantly – which is a showstopper, 2) several distributed DL implementations are geared towards cloud computing systems – which is inadequate for execution on massively parallel systems such as supercomputers. This work addresses each of these problems. We provide a distributed memory DL implementation by incorporating required changes in the TensorFlow runtime itself. This dramatically reduces the entry barrier for using a distributed TensorFlow implementation. We use Message Passing Interface (MPI) – which provides performance portability, especially since MPI specific changes are abstracted from users. Lastly – and arguably most importantly – we make our implementation available for broader use, under the umbrella of Machine Learning Toolkit for Extreme Scale (MaTEx) at http://hpc.pnl.gov/matex. We refer to our implementation as MaTEx-TensorFlow.",
"title": ""
},
{
"docid": "8840e9e1e304a07724dd6e6779cfc9c4",
"text": "Clustering has become an increasingly important task in modern application domains such as marketing and purchasing assistance, multimedia, molecular biology as well as many others. In most of these areas, the data are originally collected at different sites. In order to extract information from these data, they are merged at a central site and then clustered. In this paper, we propose a different approach. We cluster the data locally and extract suitable representatives from these clusters. These representatives are sent to a global server site where we restore the complete clustering based on the local representatives. This approach is very efficient, because the local clustering can be carried out quickly and independently from each other. Furthermore, we have low transmission cost, as the number of transmitted representatives is much smaller than the cardinality of the complete data set. Based on this small number of representatives, the global clustering can be done very efficiently. For both the local and the global clustering, we use a density based clustering algorithm. The combination of both the local and the global clustering forms our new DBDC (Density Based Distributed Clustering) algorithm. Furthermore, we discuss the complex problem of finding a suitable quality measure for evaluating distributed clusterings. We introduce two quality criteria which are compared to each other and which allow us to evaluate the quality of our DBDC algorithm. In our experimental evaluation, we will show that we do not have to sacrifice clustering quality in order to gain an efficiency advantage when using our distributed clustering approach.",
"title": ""
},
{
"docid": "17fcb38734d6525f2f0fa3ee6c313b43",
"text": "The increasing generation and collection of personal data h as created a complex ecosystem, often collaborative but som etimes combative, around companies and individuals engaging in th e use of these data. We propose that the interactions between these agents warrants a new topic of study: Human-Data Inter action (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer s cience, statistics, sociology, psychology and behavioura l economics. We expose the challenges that HDI raises, organised into thr ee core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interest ed parties in the personal and big data ecosystems.",
"title": ""
},
{
"docid": "72108944c9dfbb4a50da07aea41d22f5",
"text": "This study examined the perception of drug abuse amongst Nigerian undergraduates living off-campus. Students were surveyed at the Lagos State University, Ojo, allowing for a diverse sample that included a large percentage of the students from different faculties and departments. The undergraduate students were surveyed with a structured self-reporting anonymous questionnaire modified and adapted from the WHO student drug survey proforma. Of the 1000 students surveyed, a total of 807 responded to the questionnaire resulting in 80.7% response rate. Majority (77.9%) of the students were aged 19-30 years and unmarried. Six hundred and ninety eight (86.5%) claimed they were aware of drug abuse, but contrarily they demonstrated poor knowledge and awareness. Marijuana, 298 (45.7%) was the most common drug of abuse seen by most of the students. They were unable to identify very well the predisposing factors to drug use and the attending risks. Two hundred and sixty six (33.0%) students were currently taking one or more drugs of abuse. Coffee (43.1%) was the most commonly used drug, followed by alcohol (25.8%) and marijuana (7.4%). Despite chronic use of these drugs (5 years and above), addiction is not a common finding. The study also revealed the poor attitudes of the undergraduates to drug addicts even after rehabilitation. It was therefore concluded that the awareness, knowledge, practices and attitudes of Nigerian undergraduates towards drug abuse is very poor. Considerably more research is needed to develop effective prevention strategy that combines school-based interventions with those affecting the family, social institutions and the larger community.",
"title": ""
},
{
"docid": "1f700c0c55b050db7c760f0c10eab947",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
},
{
"docid": "d950407cfcbc5457b299e05c8352107e",
"text": "Pedicle screw instrumentation in AIS has advantages of rigid fixation, improved deformity correction and a shorter fusion, but needs an exacting technique. The author has been using the K-wire method with intraoperative single PA and lateral radiographs, because it is safe, accurate and fast. Pedicle screws are inserted in every segment on the correction side (thoracic concave) and every 2–3 on the supportive side (thoracic convex). After an over-bent rod is inserted on the corrective side, the rod is rotated 90° counterclockwise. This maneuver corrects the coronal and sagittal curves. Then the vertebra is derotated by direct vertebral rotation (DVR) correcting the rotational deformity. The direction of DVR should be opposite to that of the vertebral rotation. A rigid rod has to be used to prevent the rod from straightening out during the rod derotation and DVR. The ideal classification of AIS should address all curve patterns, predicts accurate fusion extent and have good inter/intraobserver reliability. The Suk classification matches the ideal classification is simple and memorable, and has only four structural curve patterns; single thoracic, double thoracic, double major and thoracolumbar/lumbar. Each curve has two types, A and B. When using pedicle screws in thoracic AIS, curves are usually fused from upper neutral to lower neutral vertebra. Identification of the end vertebra and the neutral vertebra is important in deciding the fusion levels and the direction of DVR. In lumbar AIS, fusion is performed from upper neutral vertebra to L3 or L4 depending on its curve types. Rod derotation and DVR using pedicle screw instrumentation give true three dimensional deformity correction in the treatment of AIS. Suk classification with these methods predicts exact fusion extent and is easy to understand and remember.",
"title": ""
},
{
"docid": "feb672a16dd86db24e8d3700cf507bf9",
"text": "In this paper we propose an efficient method to calculate a highquality depth map from a single raw image captured by a light field or plenoptic camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e. we extract so-called sub-aperture images out of the raw image of a plenoptic camera, in such a way that the virtual view points are arranged on circles around a fixed center view. By tracking an imaged scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the scene point on the image plane. Our model is able to measure a dense field of these rotations, which are inversely related to the scene depth.",
"title": ""
},
{
"docid": "8996068836559be2b253cd04aeaa285b",
"text": "We present AutonoVi-Sim, a novel high-fidelity simulation platform for autonomous driving data generation and driving strategy testing. AutonoVi-Sim is a collection of high-level extensible modules which allows the rapid development and testing of vehicle configurations and facilitates construction of complex traffic scenarios. Autonovi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify the specific vehicle sensor systems and vary time of day and weather conditions to generate robust data and gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. Autonovi-Sim facilitates training of deep-learning algorithms by enabling data export from the vehicle's sensors, including camera data, LIDAR, relative positions of traffic participants, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions. In this paper, we detail the simulator and provide specific performance and data benchmarks.",
"title": ""
},
{
"docid": "cf0d0d6895a5e5fbe1eb72e82b4d8b4b",
"text": "PURPOSE\nThe purpose of this study was twofold: (a) to determine the prevalence of compassion satisfaction, compassion fatigue, and burnout in emergency department nurses throughout the United States and (b) to examine which demographic and work-related components affect the development of compassion satisfaction, compassion fatigue, and burnout in this nursing specialty.\n\n\nDESIGN AND METHODS\nThis was a nonexperimental, descriptive, and predictive study using a self-administered survey. Survey packets including a demographic questionnaire and the Professional Quality of Life Scale version 5 (ProQOL 5) were mailed to 1,000 selected emergency nurses throughout the United States. The ProQOL 5 scale was used to measure the prevalence of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses. Multiple regression using stepwise solution was employed to determine which variables of demographics and work-related characteristics predicted the prevalence of compassion satisfaction, compassion fatigue, and burnout. The α level was set at .05 for statistical significance.\n\n\nFINDINGS\nThe results revealed overall low to average levels of compassion fatigue and burnout and generally average to high levels of compassion satisfaction among this group of emergency department nurses. The low level of manager support was a significant predictor of higher levels of burnout and compassion fatigue among emergency department nurses, while a high level of manager support contributed to a higher level of compassion satisfaction.\n\n\nCONCLUSIONS\nThe results may serve to help distinguish elements in emergency department nurses' work and life that are related to compassion satisfaction and may identify factors associated with higher levels of compassion fatigue and burnout.\n\n\nCLINICAL RELEVANCE\nImproving recognition and awareness of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses may prevent emotional exhaustion and help identify interventions that will help nurses remain empathetic and compassionate professionals.",
"title": ""
},
{
"docid": "75c5d060d99058585292a77a94e75dba",
"text": "In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.",
"title": ""
},
{
"docid": "c32a719ac619e7a48adf12fd6a534e7c",
"text": "Using smart devices and apps in clinical trials has great potential: this versatile technology is ubiquitously available, broadly accepted, user friendly and it offers integrated sensors for primary data acquisition and data sending features to allow for a hassle free communication with the study sites. This new approach promises to increase efficiency and to lower costs. This article deals with the ethical and legal demands of using this technology in clinical trials with respect to regulation, informed consent, data protection and liability.",
"title": ""
},
{
"docid": "66fce3b6c516a4fa4281d19d6055b338",
"text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.",
"title": ""
},
{
"docid": "7997cc6aafd50c7ec559270ff69e5d66",
"text": "Cloud computing adoption and diffusion are threatened by unresolved security issues that affect both the cloud provider and the cloud user. In this paper, we show how virtualization can increase the security of cloud computing, by protecting both the integrity of guest virtual machines and the cloud infrastructure components. In particular, we propose a novel architecture, Advanced Cloud Protection System (ACPS), aimed at guaranteeing increased security to cloud resources. ACPS can be deployed on several cloud solutions and can effectively monitor the integrity of guest and infrastructure components while remaining fully transparent to virtual machines and to cloud users. ACPS can locally react to security breaches as well as notify a further security management layer of such events. A prototype of our ACPS proposal is fully implemented on two current open source solutions: Eucalyptus and OpenECP. The prototype is tested against effectiveness and performance. In particular: (a) effectiveness is shown testing our prototype against attacks known in the literature; (b) performance evaluation of the ACPS prototype is carried out under different types of workload. Results show that our proposal is resilient against attacks and that the introduced overhead is small when compared to the provided",
"title": ""
}
] |
scidocsrr
|
99051e983f91eea7b2e3c66f305c2d63
|
Machine Recognition of Music Emotion: A Review
|
[
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
}
] |
[
{
"docid": "502d31f5f473f3e93ee86bdfd79e0d75",
"text": "The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics.\n By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes \"under lambdas.\" We prove that machine evaluation is equivalent to standard-order evaluation.\n Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control.\n To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.",
"title": ""
},
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "b825426604420620e1bba43c0f45115e",
"text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.",
"title": ""
},
{
"docid": "44ff9580f0ad6321827cf3f391a61151",
"text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. This work provides a machine learning scheme for the research of exploring the relationship between aesthetic perceptions of human and the computational visual features extracted from paintings.",
"title": ""
},
{
"docid": "616bd9a0599c2039ca6d32fd855b43da",
"text": "A new software-based liveness detection approach using a novel fingerprint parameterization based on quality related features is proposed. The system is tested on a highly challenging database comprising over 10,500 real and fake images acquired with five sensors of different technologies and covering a wide range of direct attack scenarios in terms of materials and procedures followed to generate the gummy fingers. The proposed solution proves to be robust to the multi-scenario dataset, and presents an overall rate of 90% correctly classified samples. Furthermore, the liveness detection method presented has the added advantage over previously studied techniques of needing just one image from a finger to decide whether it is real or fake. This last characteristic provides the method with very valuable features as it makes it less intrusive, more user friendly, faster and reduces its implementation costs.",
"title": ""
},
{
"docid": "4d11eca5601f5128801a8159a154593a",
"text": "Polymorphic malware belong to the class of host based threats which defy signature based detection mechanisms. Threat actors use various code obfuscation methods to hide the code details of the polymorphic malware and each dynamic iteration of the malware bears different and new signatures therefore makes its detection harder by signature based antimalware programs. Sandbox based detection systems perform syntactic analysis of the binary files to find known patterns from the un-encrypted segment of the malware file. Anomaly based detection systems can detect polymorphic threats but generate enormous false alarms. In this work, authors present a novel cognitive framework using semantic features to detect the presence of polymorphic malware inside a Microsoft Windows host using a process tree based temporal directed graph. Fractal analysis is performed to find cognitively distinguishable patterns of the malicious processes containing polymorphic malware executables. The main contributions of this paper are; the presentation of a graph theoretic approach for semantic characterization of polymorphism in the operating system's process tree, and the cognitive feature extraction of the polymorphic behavior for detection over a temporal process space.",
"title": ""
},
{
"docid": "f23316e66118193da4c6f166edfae6c0",
"text": "We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries. Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM. To compensate for the lack of example annotations or question-answer pairs, GUSP adopts a novel grounded-learning approach to leverage database for indirect supervision. On the challenging ATIS dataset, GUSP attained an accuracy of 84%, effectively tying with the best published results by supervised approaches.",
"title": ""
},
{
"docid": "5ee21318b1601a1d42162273a7c9026c",
"text": "We used a knock-in strategy to generate two lines of mice expressing Cre recombinase under the transcriptional control of the dopamine transporter promoter (DAT-cre mice) or the serotonin transporter promoter (SERT-cre mice). In DAT-cre mice, immunocytochemical staining of adult brains for the dopamine-synthetic enzyme tyrosine hydroxylase and for Cre recombinase revealed that virtually all dopaminergic neurons in the ventral midbrain expressed Cre. Crossing DAT-cre mice with ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice revealed a near perfect correlation between staining for tyrosine hydroxylase and beta-galactosidase or YFP. YFP-labeled fluorescent dopaminergic neurons could be readily identified in live slices. Crossing SERT-cre mice with the ROSA26-stop-lacZ or ROSA26-stop-YFP reporter mice similarly revealed a near perfect correlation between staining for serotonin-synthetic enzyme tryptophan hydroxylase and beta-galactosidase or YFP. Additional Cre expression in the thalamus and cortex was observed, reflecting the known pattern of transient SERT expression during early postnatal development. These findings suggest a general strategy of using neurotransmitter transporter promoters to drive selective Cre expression and thus control mutations in specific neurotransmitter systems. Crossed with fluorescent-gene reporters, this strategy tags neurons by neurotransmitter status, providing new tools for electrophysiology and imaging.",
"title": ""
},
{
"docid": "b73f0b44786330a363bbbcbb71c63219",
"text": "In the third shared task of the Computational Approaches to Linguistic CodeSwitching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard ArabicEgyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.",
"title": ""
},
{
"docid": "1128977e3831283b900f7d1c344f6713",
"text": "In this work, we present a framework to capture 3D models of faces in high resolutions with low computational load. The system captures only two pictures of the face, one illuminated with a colored stripe pattern and one with regular white light. The former is needed for the depth calculation, the latter is used as texture. Having these two images a combination of specialized algorithms is applied to generate a 3D model. The results are shown in different views: simple surface, wire grid respective polygon mesh or textured 3D surface.",
"title": ""
},
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "67808f54305bc2bb2b3dd666f8b4ef42",
"text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.",
"title": ""
},
{
"docid": "b6da971f13c1075ce1b4aca303e7393f",
"text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.",
"title": ""
},
{
"docid": "9bff76e87f4bfa3629e38621060050f7",
"text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels---4,000 times larger than the previous largest figure extraction dataset---with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar,\\footnote\\urlhttps://www.semanticscholar.org/ a large-scale academic search engine, and used to extract figures in 13 million scientific documents.\\footnoteA demo of our system is available at \\urlhttp://labs.semanticscholar.org/deepfigures/,and our dataset of induced labels can be downloaded at \\urlhttps://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at \\urlhttps://github.com/allenai/deepfigures-open.",
"title": ""
},
{
"docid": "9dad87b0134d9f165b0208baf40c7f0f",
"text": "Frequent Itemset Mining (FIM) is the most important and time-consuming step of association rules mining. With the increment of data scale, many efficient single-machine algorithms of FIM, such as FP-growth and Apriori, cannot accomplish the computing tasks within reasonable time. As a result of the limitation of single-machine methods, researchers presented some distributed algorithms based on MapReduce and Spark, such as PFP and YAFIM. Nevertheless, the heavy disk I/O cost at each MapReduce operation makes PFP not efficient enough. YAFIM needs to generate candidate frequent itemsets in each iterative step. It makes YAFIM time-consuming. And if the scale of data is large enough, YAFIM algorithm will not work due to the limitation of memory since the candidate frequent itemsets need to be stored in the memory. And the size of candidate itemsets is very large especially facing the massive data. In this work, we propose a distributed FP-growth algorithm based on Spark, we call it DFPS. DFPS partitions computing tasks in such a way that each computing node builds the conditional FP-tree and adopts a pattern fragment growth method to mine the frequent itemsets independently. DFPS doesn't need to pass messages between nodes during mining frequent itemsets. Our performance study shows that DFPS algorithm is more excellent than YAFIM, especially when the length of transactions is long, the number of items is large and the data is massive. And DFPS has an excellent scalability. The experimental results show that DFPS is more than 10 times faster than YAFIM for T10I4D100K dataset and Pumsb_star dataset.",
"title": ""
},
{
"docid": "3c8d59590b328e0b4ab6b856721009aa",
"text": "Mobile augmented reality (MAR) enabled devices have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. Location and proximity technologies combined with detailed mapping allow effective navigation. Visual analysis software and growing image databases enable object recognition. Advanced graphics capabilities bring sophisticated presentation of the user interface. These capabilities together allow for real-time melding of the physical and the virtual worlds and can be used for information overlay of the user’s environment for various purposes such as entertainment, tourist assistance, navigation assistance, and education [ 1 ] . In designing for MAR applications it is very important to understand the context in which the information has to be presented. Past research on information presentation on small form factor computing has highlighted the importance of presenting the right information in the right way to effectively engage the user [ 2– 4 ] . The screen space that is available on a small form factor is limited, and having augmented information presented as an overlay poses very interesting challenges. MAR usages involve devices that are able to perceive the context of the user based on the location and other sensor based information. In their paper on “ContextAware Pervasive Systems: Architectures for a New Breed of Applications”, Loke [ 5 ] ,",
"title": ""
},
{
"docid": "87a14f9cfdec433672095c2b0d9b9dde",
"text": "This paper discusses a comprehensive suite of experiments that analyze the performance of the random forest (RF) learner implemented in Weka. RF is a relatively new learner, and to the best of our knowledge, only preliminary experimentation on the construction of random forest classifiers in the context of imbalanced data has been reported in previous work. Therefore, the contribution of this study is to provide an extensive empirical evaluation of RF learners built from imbalanced data. What should be the recommended default number of trees in the ensemble? What should the recommended value be for the number of attributes? How does the RF learner perform on imbalanced data when compared with other commonly-used learners? We address these and other related issues in this work.",
"title": ""
},
{
"docid": "34dfcc1e7744afb236f14b5804214c40",
"text": "This paper presents a vision-based real-time gaze zone estimator based on a driver's head orientation composed of yaw and pitch. Generally, vision-based methods are vulnerable to the wearing of eyeglasses and image variations between day and night. The proposed method is novel in the following four ways: First, the proposed method can work under both day and night conditions and is robust to facial image variation caused by eyeglasses because it only requires simple facial features and not specific features such as eyes, lip corners, and facial contours. Second, an ellipsoidal face model is proposed instead of a cylindrical face model to exactly determine a driver's yaw. Third, we propose new features-the normalized mean and the standard deviation of the horizontal edge projection histogram-to reliably and rapidly estimate a driver's pitch. Fourth, the proposed method obtains an accurate gaze zone by using a support vector machine. Experimental results from 200 000 images showed that the root mean square errors of the estimated yaw and pitch angles are below 7 under both daylight and nighttime conditions. Equivalent results were obtained for drivers with glasses or sunglasses, and 18 gaze zones were accurately estimated using the proposed gaze estimation method.",
"title": ""
},
{
"docid": "d507fc48f5d2500251b72cb2ebc94d40",
"text": "We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.",
"title": ""
}
] |
scidocsrr
|
eaae75ea41536abc581cd11693810975
|
Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation
|
[
{
"docid": "3223563162967868075a43ca86c1d31a",
"text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these",
"title": ""
},
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
}
] |
[
{
"docid": "df7a68ebb9bc03d8a73a54ab3474373f",
"text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.",
"title": ""
},
{
"docid": "a522072914b33af2611896cac9613cb4",
"text": "Relation Extraction refers to the task of populating a database with tuples of the form r(e1, e2), where r is a relation and e1, e2 are entities. Distant supervision is one such technique which tries to automatically generate training examples based on an existing KB such as Freebase. This paper is a survey of some of the techniques in distant supervision which primarily rely on Probabilistic Graphical Models (PGMs).",
"title": ""
},
{
"docid": "296602c0884ea9c330a6fc8e33a7b722",
"text": "The skin is a major exposure route for many potentially toxic chemicals. It is, therefore, important to be able to predict the permeability of compounds through skin under a variety of conditions. Available skin permeability databases are often limited in scope and not conducive to developing effective models. This sparseness and ambiguity of available data prompted the use of fuzzy set theory to model and predict skin permeability. Using a previously published database containing 140 compounds, a rule-based Takagi–Sugeno fuzzy model is shown to predict skin permeability of compounds using octanol-water partition coefficient, molecular weight, and temperature as inputs. Model performance was estimated using a cross-validation approach. In addition, 10 data points were removed prior to model development for additional testing with new data. The fuzzy model is compared to a regression model for the same inputs using both R2 and root mean square error measures. The quality of the fuzzy model is also compared with previously published models. The statistical analysis demonstrates that the fuzzy model performs better than the regression model with identical data and validation protocols. The prediction quality for this model is similar to others that were published. The fuzzy model provides insights on the relationships between lipophilicity, molecular weight, and temperature on percutaneous penetration. This model can be used as a tool for rapid determination of initial estimates of skin permeability.",
"title": ""
},
{
"docid": "0153774b49121d8735cc3d33df69fc00",
"text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.",
"title": ""
},
{
"docid": "2450ccfdff4503fc642550a876976f10",
"text": "The purpose of this paper is to introduce sequential investment strategies that guarantee an optimal rate of growth of the capital, under minimal assumptions on the behavior of the market. The new strategies are analyzed both theoretically and empirically. The theoretical results show that the asymptotic rate of growth matches the optimal one that one could achieve with a full knowledge of the statistical properties of the underlying process generating the market, under the only assumption that the market is stationary and ergodic. The empirical results show that the performance of the proposed investment strategies measured on past NYSE and currency exchange data is solid, and sometimes even spectacular.",
"title": ""
},
{
"docid": "8ffb63dcee3bc0f541e3ec0df0d46be5",
"text": "In this paper, we show the existence of small coresets for the problems of computing k-median and kmeans clustering for points in low dimension. In other words, we show that given a point set P in <, one can compute a weighted set S ⊆ P , of size O(kε−d log n), such that one can compute the k-median/means clustering on S instead of on P , and get an (1 + ε)-approximation. As a result, we improve the fastest known algorithms for (1+ε)-approximate k-means and k-median. Our algorithms have linear running time for a fixed k and ε. In addition, we can maintain the (1+ε)-approximate k-median or k-means clustering of a stream when points are being only inserted, using polylogarithmic space and update time.",
"title": ""
},
{
"docid": "84e926e7b255a3c45e0cb515804250c3",
"text": "User-driven access control improves the coarse-grained access control of current operating systems (particularly in the mobile space) that provide only all-or-nothing access to a resource such as the camera or the current location. By granting appropriate permissions only in response to explicit user actions (for example, pressing a camera button), user-driven access control better aligns application actions with user expectations. Prior work on user-driven access control has relied in essential ways on operating system (OS) modifications to provide applications with uncompromisable access control gadgets, distinguished user interface (UI) elements that can grant access permissions. This work presents a design, implementation, and evaluation of user-driven access control that works with no OS modifications, thus making deployability and incremental adoption of the model more feasible. We develop (1) a user-level trusted library for access control gadgets, (2) static analyses to prevent malicious creation of UI events, illegal flows of sensitive information, and circumvention of our library, and (3) dynamic analyses to ensure users are not tricked into granting permissions. In addition to providing the original user-driven access control guarantees, we use static information flow to limit where results derived from sensitive sources may flow in an application.\n Our implementation targets Android applications. We port open-source applications that need interesting resource permissions to use our system. We determine in what ways user-driven access control in general and our implementation in particular are good matches for real applications. We demonstrate that our system is secure against a variety of attacks that malware on Android could otherwise mount.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "3b7c0a822c5937ac9e4d702bb23e3432",
"text": "In a video surveillance system with static cameras, object segmentation often fails when part of the object has similar color with the background, resulting in poor performance of the subsequent object tracking. Multiple kernels have been utilized in object tracking to deal with occlusion, but the performance still highly depends on segmentation. This paper presents an innovative system, named Multiple-kernel Adaptive Segmentation and Tracking (MAST), which dynamically controls the decision thresholds of background subtraction and shadow removal around the adaptive kernel regions based on the preliminary tracking results. Then the objects are tracked for the second time according to the adaptively segmented foreground. Evaluations of both segmentation and tracking on benchmark datasets and our own recorded video sequences demonstrate that the proposed method can successfully track objects in similar-color background and/or shadow areas with favorable segmentation performance.",
"title": ""
},
{
"docid": "9e766871b172f7a752c8af629bd10856",
"text": "A fundamental computational limit on automated reasoning and its effect on Knowledge Representation is examined. Basically, the problem is that it can be more difficult to reason correctly ;Nith one representationallanguage than with another and, moreover, that this difficulty increases dramatically as the expressive power of the language increases. This leads to a tradeoff between the expressiveness of a representational language and its computational tractability. Here we show that this tradeoff can be seen to underlie the differences among a number of existing representational formalisms, in addition to motivating many of the current research issues in Knowledge Representation.",
"title": ""
},
{
"docid": "37edb948f37baa14aff4843d3f83e69b",
"text": "This article concerns the manner in which group interaction during focus groups impacted upon the data generated in a study of adolescent sexual health. Twenty-nine group interviews were conducted with secondary school pupils in Ireland, and data were subjected to a qualitative analysis. In exploring the relationship between method and theory generation, we begin by focusing on the ethnographic potential within group interviews. We propose that at times during the interviews, episodes of acting-out, or presenting a particular image in the presence of others, can be highly revealing in attempting to understand the normative rules embedded in the culture from which participants are drawn. However, we highlight a specific problem with distinguishing which parts of the group interview are a valid representation of group processes and which parts accurately reflect individuals' retrospective experiences of reality. We also note that at various points in the interview, focus groups have the potential to reveal participants' vulnerabilities. In addition, group members themselves can challenge one another on how aspects of their sub-culture are represented within the focus group, in a way that is normally beyond reach within individual interviews. The formation and composition of focus groups, particularly through the clustering of like-minded individuals, can affect the dominant views being expressed within specific groups. While focus groups have been noted to have an educational and transformative potential, we caution that they may also be a source of inaccurate information, placing participants at risk. Finally, the opportunities that focus groups offer in enabling researchers to cross-check the trustworthiness of data using a post-interview questionnaire are considered. We conclude by arguing that although far from flawless, focus groups are a valuable method for gathering data about health issues.",
"title": ""
},
{
"docid": "1796b8d91de88303571cc6f3f66b580b",
"text": "In this paper it is shown that bifilar of a Quadrifilar Helix Antenna (QHA) when designed in side-fed configuration at a given diameter and length of helical arm, effectively becomes equivalent to combination of a loop and a dipole antenna. The vertical and horizontal electric fields caused by these equivalent antennas can be made to vary by changing the turn angle of the bifilar. It is shown how the variation in horizontal and vertical electric field dominance is seen until perfect circular polarization is achieved when two fields are equal at a certain turn angle where area of the loop equals product of pitch of helix and radian length i.e. equivalent dipole length. The antenna is low profile and does not require ground plane and thus can be used in high speed aerodynamic and platform bodies made of composite material where metallic ground is unavailable. Additionally not requiring ground plane increases the isolation between the antennas with stable radiation pattern and hence can be used in MIMO systems.",
"title": ""
},
{
"docid": "0bc68769c263973309b7f19a8bc7d06d",
"text": "The publication of a scholarly book is always the conjunction of an author’s desire (or need) to disseminate their experience and knowledge and the interest or expectations of a potential community of readers to gain benefit from the publication itself. Michael Piotrowski has indeed managed to optimize this relation by bringing to the public a compendium of information that I think has been heavily awaited by many scholars having to deal with corpora of historical texts. The book covers most topics related to the acquisition, encoding, and annotation of historical textual data, seen from the point of view of their linguistic content. As such, it does not address issues related, for instance, to scholarly editions of these texts, but conveys a wealth of information on the various aspects where recent developments in language technology may help digital humanities projects to be aware of the current state of the art in the field.",
"title": ""
},
{
"docid": "52d4f95b6dc6da7d5dd54003b0bc5fbf",
"text": "Leadership is a process directing to a target of which followers, the participators are shared. For this reason leadership has an important effect on succeeding organizational targets. More importance is given to the leadership studies in order to increase organizational success each day. One of the leadership researches that attracts attention recently is spiritual leadership. Spiritual leadership (SL) is important for imposing ideal to the followers and giving meaning to the works they do. Focusing on SL that has recently taken its place in leadership literature, this study looks into what extend faculty members teaching at Faculty of Education display SL qualities. The study is in descriptive scanning model. 1819 students studying at Kocaeli University Faculty of Education in 2009-2010 academic year constitute the universe of the study. Observing leadership qualities takes long time. Therefore, the sample of the study is determined by deliberate sampling method and includes 432 students studying at the last year of the faculty. Data regarding faculty members' SL qualities were collected using a questionnaire adapted from Fry's (2003) 'Spiritual Leadership Scale'. Consequently, university students think that academic stuff shows the features of SL and its sub dimensions in a medium level. According to students, academicians show attitudes related to altruistic love rather than faith and vision. It is found that faculty members couldn't display leadership qualities enough according to the students at the end of the study. © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3b5e584b95ae31ff94be85d7dbea1ccb",
"text": "Due to the fact that no NP-complete problem can be solved in polynomial time (unless P=NP), many approximability results (both positive and negative) of NP-hard optimization problems have appeared in the technical literature. In this compendium, we collect a large number of these results. ● Introduction ❍ NPO Problems: Definitions and Preliminaries ❍ Approximate Algorithms and Approximation Classes ❍ Completeness in Approximation Classes ❍ A list of NPO problems ❍ Improving the compendium ● Graph Theory ❍ Covering and Partitioning ❍ Subgraphs and Supergraphs ❍ Vertex Ordering file:///E|/COMPEND/COMPED19/COMPENDI.HTM (1 of 2) [19/1/2003 1:36:58] A compendium of NP optimization problems ❍ Isoand Other Morphisms ❍ Miscellaneous ● Network Design ❍ Spanning Trees ❍ Cuts and Connectivity ❍ Routing Problems ❍ Flow Problems ❍ Miscellaneous ● Sets and Partitions ❍ Covering, Hitting, and Splitting ❍ Weighted Set Problems ● Storage and Retrieval ❍ Data Storage ❍ Compression and Representation ❍ Miscellaneous ● Sequencing and Scheduling ❍ Sequencing on One Processor ❍ Multiprocessor Scheduling ❍ Shop Scheduling ❍ Miscellaneous ● Mathematical Programming ● Algebra and Number Theory ● Games and Puzzles ● Logic ● Program Optimization ● Miscellaneous ● References ● Index ● About this document ... Viggo Kann Mon Apr 21 13:07:14 MET DST 1997 file:///E|/COMPEND/COMPED19/COMPENDI.HTM (2 of 2) [19/1/2003 1:36:58]",
"title": ""
},
{
"docid": "0b7142ade987ca6f2683fc3fe6179fcb",
"text": "The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.",
"title": ""
},
{
"docid": "53c836280ad99b28c892ef85f31a5985",
"text": "This paper focuses on the design of 1 bit full adder circuit using Gate Diffusion Input Logic. The proposed adder schematics are developed using DSCH2 CAD tool, and their layouts are generated with Microwind 3 VLSI CAD tool. A 1 bit adder circuits are analyzed using standard CMOS 120nm features with corresponding voltage of 1.2V. The Simulated results of the proposed adder is compared with those of Pass transistor, Transmission Function, and CMOS based adder circuits. The proposed adder dissipates low power and responds faster.",
"title": ""
},
{
"docid": "55bdb8b6f4dd3dc836e9751ae8d721e3",
"text": "Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "6bc31257bfbcc9531a3acf1ec738c790",
"text": "BACKGROUND\nThe interaction of depression and anesthesia and surgery may result in significant increases in morbidity and mortality of patients. Major depressive disorder is a frequent complication of surgery, which may lead to further morbidity and mortality.\n\n\nLITERATURE SEARCH\nSeveral electronic data bases, including PubMed, were searched pairing \"depression\" with surgery, postoperative complications, postoperative cognitive impairment, cognition disorder, intensive care unit, mild cognitive impairment and Alzheimer's disease.\n\n\nREVIEW OF THE LITERATURE\nThe suppression of the immune system in depressive disorders may expose the patients to increased rates of postoperative infections and increased mortality from cancer. Depression is commonly associated with cognitive impairment, which may be exacerbated postoperatively. There is evidence that acute postoperative pain causes depression and depression lowers the threshold for pain. Depression is also a strong predictor and correlate of chronic post-surgical pain. Many studies have identified depression as an independent risk factor for development of postoperative delirium, which may be a cause for a long and incomplete recovery after surgery. Depression is also frequent in intensive care unit patients and is associated with a lower health-related quality of life and increased mortality. Depression and anxiety have been widely reported soon after coronary artery bypass surgery and remain evident one year after surgery. They may increase the likelihood for new coronary artery events, further hospitalizations and increased mortality. Morbidly obese patients who undergo bariatric surgery have an increased risk of depression. Postoperative depression may also be associated with less weight loss at one year and longer. The extent of preoperative depression in patients scheduled for lumbar discectomy is a predictor of functional outcome and patient's dissatisfaction, especially after revision surgery. General postoperative mortality is increased.\n\n\nCONCLUSIONS\nDepression is a frequent cause of morbidity in surgery patients suffering from a wide range of conditions. Depression may be identified through the use of Patient Health Questionnaire-9 or similar instruments. Counseling interventions may be useful in ameliorating depression, but should be subject to clinical trials.",
"title": ""
},
{
"docid": "4b5d5d4da56ad916afdad73cc0180cb5",
"text": "This work proposes a substrate integrated waveguide (SIW) power divider employing the Wilkinson configuration for improving the isolation performance of conventional T-junction SIW power dividers. Measurement results at 15GHz show that the isolation (S23, S32) between output ports is about 17 dB and the output return losses (S22, S33) are about 14.5 dB, respectively. The Wilkinson-type performance has been greatly improved from those (7.0 dB ∼ 8.0 dB) of conventional T-junction SIW power dividers. The measured input return loss (23 dB) and average insertion loss (3.9 dB) are also improved from those of conventional ones. The proposed Wilkinson SIW divider will play an important role in high performance SIW circuits involving power divisions.",
"title": ""
}
] |
scidocsrr
|
29a0a7020da68a5c1dc9988ca8da05f8
|
A Review of Experiences with Reliable Multicast
|
[
{
"docid": "383cfad43187d0cca06b4211548e4f5c",
"text": "Research can rarely be performed on large-scale, distributed systems at the level of thousands of workstations. In this paper, we describe the motivating constraints, design principles, and architecture for an extensible, distributed system operating in such an environment. The constraints include continuous operation, dynamic system evolution, and integration with extant systems. The Information Bus, our solution, is a novel synthesis of four design principles: core communication protocols have minimal semantics, objects are self-describing, types can be dynamically defined, and communication is anonymous. The current implementation provides both flexibility and high performance, and has been proven in several commercial environments, including integrated circuit fabrication plants and brokerage/trading floors.",
"title": ""
}
] |
[
{
"docid": "4f59e141ffc88aaed620ca58522e8f03",
"text": "Undergraduate volunteers rated a series of words for pleasantness while hearing a particular background music. The subjects in Experiment 1 received, immediately or after a 48-h delay, an unexpected word-recall test in one of the following musical cue contexts: same cue (S), different cue (D), or no cue (N). For immediate recall, context dependency (S-D) was significant but same-cue facilitation (S-N) was not. No cue effects at all were found for delayed recall, and there was a significant interaction between cue and retention interval. A similar interaction was also found in Experiment 3, which was designed to rule out an alternative explanation with respect to distraction. When the different musical selection was changed specifically in either tempo or form (genre), only pieces having an altered tempo produced significantly lower immediate recall compared with the same pieces (Experiment 2). The results support a stimulus generalization view of music-dependent memory.",
"title": ""
},
{
"docid": "dfe502f728d76f9b4294f725eca78413",
"text": "SUMMARY This paper reports work being carried out under the AMODEUS project (BRA 3066). The goal of the project is to develop interdisciplinary approaches to studying human-computer interaction and to move towards applying the results to the practicalities of design. This paper describes one of the approaches the project is taking to represent design-Design Space Analysis. One of its goals is help us bridge from relatively theoretical concerns to the practicalities of design. Design Space Analysis is a central component of a framework for representing the design rationale for designed artifacts. Our current work focusses more specifically on the design of user interfaces. A Design Space Analysis is represented using the QOC notation, which consists of Questions identifying key design issues, Options providing possible answers to the Questions, and Criteria for assessing and comparing the Options. In this paper we give an overview of our approach, some examples of the research issues we are currently tackling and an illustration of its role in helping to integrate the work of some of our project partners with design considerations.",
"title": ""
},
{
"docid": "1384f95f0f66e64af28e91f8c99a12e8",
"text": "Nature-inspired computing has been a hot topic in scientific and engineering fields in recent years. Inspired by the shallow water wave theory, the paper presents a novel metaheuristic method, named water wave optimization (WWO), for global optimization problems. We show how the beautiful phenomena of water waves, such as propagation, refraction, and breaking, can be used to derive effective mechanisms for searching in a high-dimensional solution space. In general, the algorithmic framework of WWO is simple, and easy to implement with a small-size population and only a few control parameters. We have tested WWO on a diverse set of benchmark problems, and applied WWO to a real-world high-speed train scheduling problem in China. The computational results demonstrate that WWO is very competitive with state-of-the-art evolutionary algorithms including invasive weed optimization (IWO), biogeography-based optimization (BBO), bat algorithm (BA), etc. The new metaheuristic is expected to have wide applications in real-world engineering optimization problems. & 2014 Elsevier Ltd. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-SA license (http://creativecommons.org/licenses/by-nc-sa/3.0/).",
"title": ""
},
{
"docid": "ba72cbe165b4dc5855498f4dc5c0eb71",
"text": "Meta-heuristic algorithms prove to be competent in outperforming deterministic algorithms for real-world optimization problems. Firefly algorithm is one such recently developed algorithm inspired by the flashing behavior of fireflies. In this work, a detailed formulation and explanation of the Firefly algorithm implementation is given. Later Firefly algorithm is verified using six unimodal engineering optimization problems reported in the specialized literature.",
"title": ""
},
{
"docid": "1eee6741c5f303763a45fccf2aebe776",
"text": "This report summarizes a variety of the most useful and commonly applied methods for obtaining Dempster-Shafer structures, and their mathematical kin probability boxes, from empirical information or theoretical knowledge. The report includes a review of the aggregation methods for handling agreement and conflict when multiple such objects are obtained from different sources. * The work described in this report was performed for Sandia National Laboratories under Contract No. 19094",
"title": ""
},
{
"docid": "90acc4ae44da11db8fbcae5cfa70bf10",
"text": "Capsules as well as dynamic routing between them are most recently proposed structures for deep neural networks. A capsule groups data into vectors or matrices as poses rather than conventional scalars to represent specific properties of target instance. Besides of pose, a capsule should be attached with a probability (often denoted as activation) for its presence. The dynamic routing helps capsules achieve more generalization capacity with many fewer model parameters. However, the bottleneck that prevents widespread applications of capsule is the expense of computation during routing. To address this problem, we generalize existing routing methods within the framework of weighted kernel density estimation, and propose two fast routing methods with different optimization strategies. Our methods prompt the time efficiency of routing by nearly 40% with negligible performance degradation. By stacking a hybrid of convolutional layers and capsule layers, we construct a network architecture to handle inputs at a resolution of 64× 64 pixels. The proposed models achieve a parallel performance with other leading methods in multiple benchmarks.",
"title": ""
},
{
"docid": "71404f5500b0e173c91ac1abdf5d1c88",
"text": "Understanding the navigational behavior of website visitors is a significant factor of success in the emerging business models of electronic commerce and even mobile commerce. In this paper, we describe the different approaches of mining web navigation pattern.",
"title": ""
},
{
"docid": "15f2aca611a24b4932e70b472a8ec7e3",
"text": "Hashing is critical for high performance computer architecture. Hashing is used extensively in hardware applications, such as page tables, for address translation. Bit extraction and exclusive ORing hashing “methods” are two commonly used hashing functions for hardware applications. There is no study of the performance of these functions and no mention anywhere of the practical performance of the hashing functions in comparison with the theoretical performance prediction of hashing schemes. In this paper, we show that, by choosing hashing functions at random from a particular class, called H3, of hashing functions, the analytical performance of hashing can be achieved in practice on real-life data. Our results about the expected worst case performance of hashing are of special significance, as they provide evidence for earlier theoretical predictions. Index Terms —Hashing in hardware, high performance computer architecture, page address translation, signature functions, high speed information storage and retrieval.",
"title": ""
},
{
"docid": "736b98a5b6a86db837362ab2c7086484",
"text": "This is an in-vitro pilot study which established the effect of radiofrequency radiation (RFR) from 2.4 GHz laptop antenna on human semen. Ten samples of the semen, collected from donors between the ages of 20 and 30 years were exposed when the source of the RFR was in active mode. Sequel to the exposure, both the exposed samples and another ten unexposed samples from same donors were analysed for sperm concentration, motility and morphology grading. A test of significance between results of these semen parameters using Mann-Whitney Utest at 0.05 level of significance showed a significant effect of RFR exposure on the semen parameters considered.",
"title": ""
},
{
"docid": "a520bf66f1b54a7444f2cbe3f2da8000",
"text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.",
"title": ""
},
{
"docid": "95306b34302c35b3c38fd5141e472896",
"text": "We used the machine learning technique of Li et al. (PRL 114, 2015) for molecular dynamics simulations. Atomic configurations were described by feature matrix based on internal vectors, and linear regression was used as a learning technique. We implemented this approach in the LAMMPS code. The method was applied to crystalline and liquid aluminum and uranium at different temperatures and densities, and showed the highest accuracy among different published potentials. Phonon density of states, entropy and melting temperature of aluminum were calculated using this machine learning potential. The results are in excellent agreement with experimental data and results of full ab initio calculations.",
"title": ""
},
{
"docid": "3e28cbfc53f6c42bb0de2baf5c1544aa",
"text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.",
"title": ""
},
{
"docid": "13a06fb1a1bdf0df0043fe10f74443e1",
"text": "Coping with the extreme growth of the number of users is one of the main challenges for the future IEEE 802.11 networks. The high interference level, along with the conventional standardized carrier sensing approaches, will degrade the network performance. To tackle these challenges, the Dynamic Sensitivity Control (DSC) and the BSS Color scheme are considered in IEEE 802.11ax and IEEE 802.11ah, respectively. The main purpose of these schemes is to enhance the network throughput and improve the spectrum efficiency in dense networks. In this paper, we evaluate the DSC and the BSS Color scheme along with the PARTIAL-AID (PAID) feature introduced in IEEE 802.11ac, in terms of throughput and fairness. We also, exploit the performance when the aforementioned techniques are combined. The simulations show a significant gain in total throughput when these techniques are applied.",
"title": ""
},
{
"docid": "63429f5eebc2434660b0073b802127c2",
"text": "Body Area Networks are unique in that the large-scale mobility of users allows the network itself to travel across a diverse range of operating domains or even to enter new and unknown environments. This network mobility is unlike node mobility in that sensed changes in inter-network interference level may be used to identify opportunities for intelligent inter-networking, for example, by merging or splitting from other networks, thus providing an extra degree of freedom. This paper introduces the concept of context-aware bodynets for interactive environments using inter-network interference sensing. New ideas are explored at both the physical and link layers with an investigation based on a 'smart' office environment. A series of carefully controlled measurements of the mesh interconnectivity both within and between an ambulatory body area network and a stationary desk-based network were performed using 2.45 GHz nodes. Received signal strength and carrier to interference ratio time series for selected node to node links are presented. The results provide an insight into the potential interference between the mobile and static networks and highlight the possibility for automatic identification of network merging and splitting opportunities.",
"title": ""
},
{
"docid": "b34eb302108ffd515ed9fc896fa7015f",
"text": "Recent magnetoencephalography (MEG) and functional magnetic resonance imaging studies of human auditory cortex are pointing to brain areas on lateral Heschl's gyrus as the 'pitch-processing center'. Here we describe results of a combined MEG-psychophysical study designed to investigate the timing of the formation of the percept of pitch and the generality of the hypothesized 'pitch-center'. We compared the cortical and behavioral responses to Huggins pitch (HP), a stimulus requiring binaural processing to elicit a pitch percept, with responses to tones embedded in noise (TN)-perceptually similar but physically very different signals. The stimuli were crafted to separate the electrophysiological responses to onset of the pitch percept from the onset of the initial stimulus. Our results demonstrate that responses to monaural pitch stimuli are affected by cross-correlational processes in the binaural pathway. Additionally, we show that MEG illuminates processes not simply observable in behavior. Crucially, the MEG data show that, although physically disparate, both HP and TN are mapped onto similar representations by 150 ms post-onset, and provide critical new evidence that the 'pitch onset response' reflects central pitch mechanisms, in agreement with models postulating a single, central pitch extractor.",
"title": ""
},
{
"docid": "e4f31c3e7da3ad547db5fed522774f0e",
"text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).",
"title": ""
},
{
"docid": "30aaf753d3ec72f07d4838de391524ca",
"text": "The present study was aimed to determine the effect on liver, associated oxidative stress, trace element and vitamin alteration in dogs with sarcoptic mange. A total of 24 dogs with clinically established diagnosis of sarcoptic mange, divided into two groups, severely infested group (n=9) and mild/moderately infested group (n=15), according to the extent of skin lesions caused by sarcoptic mange and 6 dogs as control group were included in the present study. In comparison to healthy control hemoglobin, PCV, and TEC were significantly (P<0.05) decreased in dogs with sarcoptic mange however, significant increase in TLC along with neutrophilia and lymphopenia was observed only in severely infested dogs. The albumin, glucose and cholesterol were significantly (P<0.05) decreased and globulin, ALT, AST and bilirubin were significantly (P<0.05) increased in severely infested dogs when compared to other two groups. Malondialdehyde (MDA) levels were significantly (P<0.01) higher in dogs with sarcoptic mange, with levels highest in severely infested groups. Activity of superoxide dismutase (SOD) (P<0.05) and catalase were significantly (P<0.01) lower in sarcoptic infested dogs when compared with the healthy control group. Zinc and copper levels in dogs with sarcoptic mange were significantly (P<0.05) lower when compared with healthy control group with the levels lowest in severely infested group. Vitamin A and vitamin C levels were significantly (P<0.05) lower in sarcoptic infested dogs when compared to healthy control. From the present study, it was concluded that sarcoptic mange in dogs affects the liver and the infestation is associated with oxidant/anti-oxidant imbalance, significant alteration in trace elements and vitamins.",
"title": ""
},
{
"docid": "41da3bc399664e62b4e07006893cdd50",
"text": "Cloud storage service is one of cloud services where cloud service provider can provide storage space to customers. Because cloud storage service has many advantages which include convenience, high computation and capacity, it attracts the user to outsource data in the cloud. However, the user outsources data directly in cloud storage service that is unsafe when outsourcing data is sensitive for the user. Therefore, ciphertext-policy attribute-based encryption is a promising cryptographic solution in cloud environment, which can be drawn up for access control by the data owner to define access policy. Unfortunately, an outsourced architecture applied with the attribute-based encryption introduces many challenges in which one of the challenges is revocation. The issue is a threat to data security in the data owner. In this paper, we survey related studies in cloud data storage with revocation and define their requirements. Then we explain and analyze four representative approaches. Finally, we provide some topics for future research",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "dd9ff422ede7f5df297fa29fdef49db3",
"text": "Courts have articulated a number of legal tests to distinguish corporate transactions that have a legitimate business or economic purpose from those carried out largely, if not solely, for favorable tax treatment. We outline an approach to analyzing the economic substance of corporate transactions based on the property rights theory of the firm and describe its application in two recent tax cases.",
"title": ""
}
] |
scidocsrr
|
4cb70dbe54b21485773023fd942ae7de
|
Service-Dominant Strategic Sourcing: Value Creation Versus Cost Saving
|
[
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
}
] |
[
{
"docid": "9b0f286b03b3d81942747a98ac0e8817",
"text": "Automated recommendations for next tracks to listen to or to include in a playlist are a common feature on modern music platforms. Correspondingly, a variety of algorithmic approaches for determining tracks to recommend have been proposed in academic research. The most sophisticated among them are often based on conceptually complex learning techniques which can also require substantial computational resources or special-purpose hardware like GPUs. Recent research, however, showed that conceptually more simple techniques, e.g., based on nearest-neighbor schemes, can represent a viable alternative to such techniques in practice.\n In this paper, we describe a hybrid technique for next-track recommendation, which was evaluated in the context of the ACM RecSys 2018 Challenge. A combination of nearest-neighbor techniques, a standard matrix factorization algorithm, and a small set of heuristics led our team KAENEN to the 3rd place in the \"creative\" track and the 7th one in the \"main\" track, with accuracy results only a few percent below the winning teams. Given that offline prediction accuracy is only one of several possible quality factors in music recommendation, practitioners have to validate if slight accuracy improvements truly justify the use of highly complex algorithms in real-world applications.",
"title": ""
},
{
"docid": "4174c1d49ff8755c6b82c2b453918d29",
"text": "Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.",
"title": ""
},
{
"docid": "e6dcc8f80b5b6528531b7f6e617cd633",
"text": "Over 2 million military and civilian personnel per year (over 1 million in the United States) are occupationally exposed, respectively, to jet propulsion fuel-8 (JP-8), JP-8 +100 or JP-5, or to the civil aviation equivalents Jet A or Jet A-1. Approximately 60 billion gallon of these kerosene-based jet fuels are annually consumed worldwide (26 billion gallon in the United States), including over 5 billion gallon of JP-8 by the militaries of the United States and other NATO countries. JP-8, for example, represents the largest single chemical exposure in the U.S. military (2.53 billion gallon in 2000), while Jet A and A-1 are among the most common sources of nonmilitary occupational chemical exposure. Although more recent figures were not available, approximately 4.06 billion gallon of kerosene per se were consumed in the United States in 1990 (IARC, 1992). These exposures may occur repeatedly to raw fuel, vapor phase, aerosol phase, or fuel combustion exhaust by dermal absorption, pulmonary inhalation, or oral ingestion routes. Additionally, the public may be repeatedly exposed to lower levels of jet fuel vapor/aerosol or to fuel combustion products through atmospheric contamination, or to raw fuel constituents by contact with contaminated groundwater or soil. Kerosene-based hydrocarbon fuels are complex mixtures of up to 260+ aliphatic and aromatic hydrocarbon compounds (C(6) -C(17+); possibly 2000+ isomeric forms), including varying concentrations of potential toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, naphthalenes (including polycyclic aromatic hydrocarbons [PAHs], and certain other C(9)-C(12) fractions (i.e., n-propylbenzene, trimethylbenzene isomers). While hydrocarbon fuel exposures occur typically at concentrations below current permissible exposure limits (PELs) for the parent fuel or its constituent chemicals, it is unknown whether additive or synergistic interactions among hydrocarbon constituents, up to six performance additives, and other environmental exposure factors may result in unpredicted toxicity. While there is little epidemiological evidence for fuel-induced death, cancer, or other serious organic disease in fuel-exposed workers, large numbers of self-reported health complaints in this cohort appear to justify study of more subtle health consequences. A number of recently published studies reported acute or persisting biological or health effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, to constituent chemicals of these fuels, or to fuel combustion products. This review provides an in-depth summary of human, animal, and in vitro studies of biological or health effects from exposure to JP-8, JP-8 +100, JP-5, Jet A, Jet A-1, or kerosene.",
"title": ""
},
{
"docid": "79079ee1e352b997785dc0a85efed5e4",
"text": "Automatic recognition of the historical letters (XI-XVIII centuries) carved on the stoned walls of St.Sophia cathedral in Kyiv (Ukraine) was demonstrated by means of capsule deep learning neural network. It was applied to the image dataset of the carved Glagolitic and Cyrillic letters (CGCL), which was assembled and pre-processed recently for recognition and prediction by machine learning methods. CGCL dataset contains >4000 images for glyphs of 34 letters which are hardly recognized by experts even in contrast to notMNIST dataset with the better images of 10 letters taken from different fonts. The capsule network was applied for both datasets in three regimes: without data augmentation, with lossless data augmentation, and lossy data augmentation. Despite the much worse quality of CGCL dataset and extremely low number of samples (in comparison to notMNIST dataset) the capsule network model demonstrated much better results than the previously used convolutional neural network (CNN). The training rate for capsule network model was 5-6 times higher than for CNN. The validation accuracy (and validation loss) was higher (lower) for capsule network model than for CNN without data augmentation even. The area under curve (AUC) values for receiver operating characteristic (ROC) were also higher for the capsule network model than for CNN model: 0.88-0.93 (capsule network) and 0.50 (CNN) without data augmentation, 0.91-0.95 (capsule network) and 0.51 (CNN) with lossless data augmentation, and similar results of 0.91-0.93 (capsule network) and 0.9 (CNN) in the regime of lossless data augmentation only. The confusion matrixes were much better for capsule network than for CNN model and gave the much lower type I (false positive) and type II (false negative) values in all three regimes of data augmentation. These results supports the previous claims that capsule-like networks allow to reduce error rates not only on MNIST digit dataset, but on the other notMNIST letter dataset and the more complex CGCL handwriting graffiti letter dataset also. Moreover, capsule-like networks allow to reduce training set sizes to 180 images even like in this work, and they are considerably better than CNNs on the highly distorted and incomplete letters even like CGCL handwriting graffiti. Keywords— machine learning, deep learning, capsule neural network, stone carving dataset, notMNIST, data augmentation",
"title": ""
},
{
"docid": "5ec1cff52a55c5bd873b5d0d25e0456b",
"text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",
"title": ""
},
{
"docid": "0a58aa0c5dff94efa183fcf6fb7952f6",
"text": "When people explore new environments they often use landmarks as reference points to help navigate and orientate themselves. This research paper examines how spatial datasets can be used to build a system for use in an urban environment which functions as a city guide, announcing Features of Interest (FoI) as they become visible to the user (not just proximal), as the user moves freely around the city. Visibility calculations for the FoIs were pre-calculated based on a digital surface model derived from LIDAR (Light Detection and Ranging) data. The results were stored in a textbased relational database management system (RDBMS) for rapid retrieval. All interaction between the user and the system was via a speech-based interface, allowing the user to record and request further information on any of the announced FoI. A prototype system, called Edinburgh Augmented Reality System (EARS) , was designed, implemented and field tested in order to assess the effectiveness of these ideas. The application proved to be an innovating, ‘non-invasive’ approach to augmenting the user’s reality",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "5c29083624be58efa82b4315976f8dc2",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition ra tes using fewer features and samples.",
"title": ""
},
{
"docid": "471af6726ec78126fcf46f4e42b666aa",
"text": "A new thermal tuning circuit for optical ring modulators enables demonstration of an optical chip-to-chip link for the first time with monolithically integrated photonic devices in a commercial 45nm SOI process, without any process changes. The tuning circuit uses independent 1/0 level-tracking and 1/0 bit counting to remain resilient against laser self-heating transients caused by non-DC-balanced transmit data. A 30fJ/bit transmitter and 374fJ/bit receiver with 6μApk-pk photocurrent sensitivity complete the 5Gb/s link. The thermal tuner consumes 275fJ/bit and achieves a 600 GHz tuning range with a heater tuning efficiency of 3.8μW/GHz.",
"title": ""
},
{
"docid": "24a10176ec2367a6a0b5333d57b894b8",
"text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.",
"title": ""
},
{
"docid": "9edfedc5a1b17481ee8c16151cf42c88",
"text": "Nevus comedonicus is considered a genodermatosis characterized by the presence of multiple groups of dilated pilosebaceous orifices filled with black keratin plugs, with sharply unilateral distribution mostly on the face, neck, trunk, upper arms. Lesions can appear at any age, frequently before the age of 10 years, but they are usually present at birth. We present a 2.7-year-old girl with a very severe form of nevus comedonicus. She exhibited lesions located initially at the left side of the body with a linear characteristic, following Blascko lines T1/T2, T5, T7, S1 /S2, but progressively developed lesions on the right side of the scalp and left gluteal area.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "3e9a214856235ef36a4dd2e9684543b7",
"text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships. The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2848635e59cf2a41871d79748822c176",
"text": "The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "a2e0163aebb348d3bfab7ebac119e0c0",
"text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.",
"title": ""
},
{
"docid": "c1632ead357d08c3e019bb12ff75e756",
"text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "ec9eb309dd9d6f72bd7286580e75d36d",
"text": "This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.",
"title": ""
}
] |
scidocsrr
|
788d40c0b87990e754b1d4a9c98f72ff
|
HoME: a Household Multimodal Environment
|
[
{
"docid": "46c8336f395d04d49369d406f41b0602",
"text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.",
"title": ""
},
{
"docid": "8e6debae3b3d3394e87e671a14f8819e",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
}
] |
[
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e235a9eb5df7c5cf1487ae03cc6bc4d3",
"text": "The objective of the proposed scheme is to extract the maximum power at different wind turbine speed. In order to achieve this, MPPT controller is implemented on the rectifier side for the extraction of maximum power. On the inverter side normal closed loop PWM control is carried out. MPPT controller is implemented using fuzzy logic control technique. The fuzzy controller's role here is to track the speed reference of the generator. By doing so and keeping the generator speed at an optimal reference value, maximum power can be attained. This procedure is repeated for various wind turbine speeds, When the wind speed increases the real power generated by the PMSG based WECS increases with the aid of MPPT controller.",
"title": ""
},
{
"docid": "a49ea9c9f03aa2d926faa49f4df63b7a",
"text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.",
"title": ""
},
{
"docid": "3fb6cec95fcaa0f8b6c6e4f649591b35",
"text": "This paper presents the performance of DSP, image and 3D applications on recent general-purpose microprocessors using streaming SIMD ISA extensions (integer and oating point). The 9 benchmarks benchmark we use for this evaluation have been optimized for DLP and caches use with SIMD extensions and data prefetch. The result of these cumulated optimizations is a speedup that ranges from 1.9 to 7.1. All the benchmarks were originaly computation bound and 7 becomes memory bandwidth bound with the addition of SIMD and data prefetch. Quadrupling the memory bandwidth has no eeect on original kernels but improves the performance of SIMD kernels by 15-55%.",
"title": ""
},
{
"docid": "842202ed67b71c91630fcb63c4445e38",
"text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.",
"title": ""
},
{
"docid": "b15815b79af412b59b1780538f7dc4ce",
"text": "Aim—To recognise automatically the main components of the fundus on digital colour images. Methods—The main features of a fundus retinal image were defined as the optic disc, fovea, and blood vessels. Methods are described for their automatic recognition and location. 112 retinal images were preprocessed via adaptive, local, contrast enhancement. The optic discs were located by identifying the area with the highest variation in intensity of adjacent pixels. Blood vessels were identified by means of a multilayer perceptron neural net, for which the inputs were derived from a principal component analysis (PCA) of the image and edge detection of the first component of PCA. The foveas were identified using matching correlation together with characteristics typical of a fovea—for example, darkest area in the neighbourhood of the optic disc. The main components of the image were identified by an experienced ophthalmologist for comparison with computerised methods. Results—The sensitivity and specificity of the recognition of each retinal main component was as follows: 99.1% and 99.1% for the optic disc; 83.3% and 91.0% for blood vessels; 80.4% and 99.1% for the fovea. Conclusions—In this study the optic disc, blood vessels, and fovea were accurately detected. The identification of the normal components of the retinal image will aid the future detection of diseases in these regions. In diabetic retinopathy, for example, an image could be analysed for retinopathy with reference to sight threatening complications such as disc neovascularisation, vascular changes, or foveal exudation. (Br J Ophthalmol 1999;83:902–910) The patterns of disease that aVect the fundus of the eye are varied and usually require identification by a trained human observer such as a clinical ophthalmologist. The employment of digital fundus imaging in ophthalmology provides us with digitised data that could be exploited for computerised detection of disease. Indeed, many investigators use computerised image analysis of the eye, under the direction of a human observer. The management of certain diseases would be greatly facilitated if a fully automated method was employed. An obvious example is the care of diabetic retinopathy, which requires the screening of large numbers of patients (approximately 30 000 individuals per million total population ). Screening of diabetic retinopathy may reduce blindness in these patients by 50% and can provide considerable cost savings to public health systems. 9 Most methods, however, require identification of retinopathy by expensive, specifically trained personnel. A wholly automated approach involving fundus image analysis by computer could provide an immediate classification of retinopathy without the need for specialist opinions. Manual semiquantitative methods of image processing have been employed to provide faster and more accurate observation of the degree of macula oedema in fluorescein images. Progress has been made towards the development of a fully automated system to detect microaneurysms in digitised fluorescein angiograms. 16 Fluorescein angiogram images are good for observing some pathologies such as microaneurysms which are indicators of diabetic retinopathy. It is not an ideal method for an automatic screening system since it requires an injection of fluorescein into the body. This disadvantage makes the use of colour fundus images, which do not require an injection of fluorescein, more suitable for automatic",
"title": ""
},
{
"docid": "5506207c5d11a464b1bca39d6092089e",
"text": "Scalp recorded event-related potentials were used to investigate the neural activity elicited by emotionally negative and emotionally neutral words during the performance of a recognition memory task. Behaviourally, the principal difference between the two word classes was that the false alarm rate for negative items was approximately double that for the neutral words. Correct recognition of neutral words was associated with three topographically distinct ERP memory 'old/new' effects: an early, bilateral, frontal effect which is hypothesised to reflect familiarity-driven recognition memory; a subsequent left parietally distributed effect thought to reflect recollection of the prior study episode; and a late onsetting, right-frontally distributed effect held to be a reflection of post-retrieval monitoring. The old/new effects elicited by negative words were qualitatively indistinguishable from those elicited by neutral items and, in the case of the early frontal effect, of equivalent magnitude also. However, the left parietal effect for negative words was smaller in magnitude and shorter in duration than that elicited by neutral words, whereas the right frontal effect was not evident in the ERPs to negative items. These differences between neutral and negative words in the magnitude of the left parietal and right frontal effects were largely attributable to the increased positivity of the ERPs elicited by new negative items relative to the new neutral items. Together, the behavioural and ERP findings add weight to the view that emotionally valenced words influence recognition memory primarily by virtue of their high levels of 'semantic cohesion', which leads to a tendency for 'false recollection' of unstudied items.",
"title": ""
},
{
"docid": "28cba5bf535dabdfadfd1f634a574d52",
"text": "There are several complex business processes in the higher education. As the number of university students has been tripled in Hungary the automation of these task become necessary. The Near Field Communication (NFC) technology provides a good opportunity to support the automated execution of several education related processes. Recently a new challenge is identified at the Budapest University of Technology and Economics. As most of the lecture notes had become available in electronic format the students especially the inexperienced freshman ones did not attend to the lectures significantly decreasing the rate of successful exams. This drove to the decision to elaborate an accurate and reliable information system for monitoring the student's attendance at the lectures. Thus we have developed a novel, NFC technology based business use case of student attendance monitoring. In order to meet the requirements of the use case we have implemented a highly autonomous distributed environment assembled by NFC enabled embedded devices, so-called contactless terminals and a scalable backoffice. Beside the opportunity of contactless card based student identification the terminals support biometric identification by fingerprint reading. These features enable the implementation of flexible and secure identification scenarios. The attendance monitoring use case has been tested in a pilot project involving about 30 access terminals and more that 1000 students. In this paper we are introducing the developed attendance monitoring use case, the implemented NFC enabled system, and the experiences gained during the pilot project.",
"title": ""
},
{
"docid": "7f84e215df3d908249bde3be7f2b3cab",
"text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.",
"title": ""
},
{
"docid": "732f651b2ec4570a1229d8427b166c84",
"text": "hundreds of specialized apps? Who could have anticipated the power of our everyday devices to capture our every moment and movement? Cameras, GPS tracking, sensors—a phone is no longer just a phone; it is a powerful personal computing device loaded with access to interactive services that you carry with you everywhere you go. In response to these technological changes, user populations have diversified and grown. Once limited to workplaces and used only by experts, interactive computational devices and applications are now widely available for everyday use, anywhere, anytime by any and all of us. Though complex institutional infrastructures and communications networks still provide the backbone of our digital communications world, HCI research has strongly affected the marketability of these new technologies and networked systems. Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them. —Thomas T. Hewett et al., 1992",
"title": ""
},
{
"docid": "d6f235abee285021a733b79b6d9c4411",
"text": "We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. In particular, we model risk-sensitivity in a reinforcement learning framework by making use of models of human decision-making having their origins in behavioral psychology, behavioral economics, and neuroscience. We propose a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. We demonstrate the performance of the proposed technique on two examples, the first of which is the canonical Grid World example and the second of which is a Markov decision process modeling passengers decisions regarding ride-sharing. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the Markov decision process.",
"title": ""
},
{
"docid": "db55d7b7e0185d872b27c89c3892a289",
"text": "Bitcoin relies on the Unspent Transaction Outputs (UTXO) set to efficiently verify new generated transactions. Every unspent output, no matter its type, age, value or length is stored in every full node. In this paper we introduce a tool to study and analyze the UTXO set, along with a detailed description of the set format and functionality. Our analysis includes a general view of the set and quantifies the difference between the two existing formats up to the date. We also provide an accurate analysis of the volume of dust and unprofitable outputs included in the set, the distribution of the block height in which the outputs where included, and the use of non-standard outputs.",
"title": ""
},
{
"docid": "94a2b34eaa02ffeffdde5aa74e7836d2",
"text": "Drought is a stochastic natural hazard that is instigated by intense and persistent shortage of precipitation. Following an initial meteorological phenomenon, subsequent impacts are realized on agriculture and hydrology. Among the natural hazards, droughts possess certain unique features; in addition to delayed effects, droughts vary by multiple dynamic dimensions including severity and duration, which in addition to causing a pervasive and subjective network of impacts makes them difficult to characterize. In order manage drought, drought characterization is essential enabling both retrospective analyses (e.g., severity versus impacts analysis) and prospective planning (e.g., risk assessment). The adaptation of a simplified method by drought indices has facilitated drought characterization for various users and entities. More than 100 drought indices have so far been proposed, some of which are operationally used to characterize drought using gridded maps at regional and national levels. These indices correspond to different types of drought, including meteorological, agricultural, and hydrological drought. By quantifying severity levels and declaring drought’s start and end, drought indices currently aid in a variety of operations including drought early warning and monitoring and contingency planning. Given their variety and ongoing development, it is crucial to provide a comprehensive overview of available drought indices that highlights their difference and examines the trend in their development. This paper reviews 74 operational and proposed drought indices and describes research directions.",
"title": ""
},
{
"docid": "5dc78e62ca88a6a5f253417093e2aa4d",
"text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).",
"title": ""
},
{
"docid": "fe5aebde601f7f44cfb87e9eea268fef",
"text": "Mining with big data or big data mining has become an active research area. It is very difficult using current methodologies and data mining software tools for a single personal computer to efficiently deal with very large datasets. The parallel and cloud computing platforms are considered a better solution for big data mining. The concept of parallel computing is based on dividing a large problem into smaller ones and each of them is carried out by one single processor individually. In addition, these processes are performed concurrently in a distributed and parallel manner. There are two common methodologies used to tackle the big data problem. The first one is the distributed procedure based on the data parallelism paradigm, where a given big dataset can be manually divided into n subsets, and n algorithms are respectively executed for the corresponding n subsets. The final result can be obtained from a combination of the outputs produced by the n algorithms. The second one is the MapReduce based procedure under the cloud computing platform. This procedure is composed of the map and reduce processes, in which the former performs filtering and * Corresponding author: Chih-Fong Tsai Department of Information Management, National Central University, Taiwan; Tel: +886-3-422-7151 ; Fax: +886-3-4254604 E-mail address: [email protected]",
"title": ""
},
{
"docid": "bc0ca1e4f698fff9277e5bbcf8c8b797",
"text": "This paper presents a hybrid method combining a vector fitting (VF) and a global optimization for diagnosing coupled resonator bandpass filters. The method can extract coupling matrix from the measured or electromagnetically simulated admittance parameters (Y -parameters) of a narrow band coupled resonator bandpass filter with losses. The optimization method is used to remove the phase shift effects of the measured or the EM simulated Y -parameters caused by the loaded transmission lines at the input/output ports of a filter. VF is applied to determine the complex poles and residues of the Y -parameters without phase shift. The coupling matrix can be extracted (also called the filter diagnosis) by these complex poles and residues. The method can be used to computer-aided tuning (CAT) of a filter in the stage of this filter design and/or product process to accelerate its physical design. Three application examples illustrate the validity of the proposed method.",
"title": ""
},
{
"docid": "1cdbeb23bf32c20441a208b3c3a05480",
"text": "Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely on either battery-powered devices which create cost and maintenance issues or cameras which have accuracy and privacy issues. This paper introduces a system that is able to detect the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space as well as implement it in a smart wet lab application. The system is used to track real-time location and motion of the tags in the wet lab as well as recognize pouring actions performed on the objects to which the tag is attached. The median localization accuracy is 7.6cm -- (3.1, 5, 1.9)cm for each (x, y, z) axis -- with max update rates of 15 Sample/s using single RFID reader antenna.",
"title": ""
},
{
"docid": "b455105e5b82f6226198866f324132d1",
"text": "The creation of both a functionally and aesthetically pleasing nasal tip contour is demanding and depends on various different parameters. Typically, procedures are performed with emphasis on narrowing the nasal tip structure. Excisional techniques alone inevitably lead to a reduction in skeletal support and are often prone to unpredictable deformities. But also long-term results of classical suture techniques have shown unfavorable outcomes. Particularly, pinching of the ala and a displacement of the caudal margin of the lateral crus below the cephalic margin belong to this category. A characteristic loss of structural continuity between the domes and the alar lobule and an undesirable shadowing occur. These effects lead to an unnatural appearance of the nasal tip and frequently to impaired nasal breathing. Stability and configuration of the alar cartilages alone do not allow for an adequate evaluation of the nasal tip contour. Rather a three-dimensional approach is required to describe all nasal tip structures. Especially, the rotational angle of the alar surface as well as the longitudinal axis of the lateral crus in relation to cranial septum should be considered in the three-dimensional analysis. Taking the various parameters into account, the authors present new aspects in nasal tip surgery which contribute to the creation of a functionally and aesthetically pleasing as well as durable nasal tip contour.",
"title": ""
},
{
"docid": "9af703a47d382926698958fba88c1e1a",
"text": "Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attacking landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaption of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher then the security level of software developed using standard Scrum.",
"title": ""
},
{
"docid": "64d9f6973697749b6e2fa330101cbc77",
"text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.",
"title": ""
}
] |
scidocsrr
|
0f7906ae6cc949541333e43ff695879a
|
Statistical transformer networks: learning shape and appearance models via self supervision
|
[
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "b7387928fe8307063cafd6723c0dd103",
"text": "We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signal's structure based on optimization of the network for classification accuracy, sparse representation, and regularization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.",
"title": ""
},
{
"docid": "4551ee1978ef563259c8da64cc0d1444",
"text": "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6%. We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.",
"title": ""
}
] |
[
{
"docid": "39c2c3e7f955425cd9aaad1951d13483",
"text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .",
"title": ""
},
{
"docid": "1afa72a646fcfa5dfe632126014f59be",
"text": "The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) has served as a comprehensive repository of bacterial virulence factors (VFs) for >7 years. Bacterial virulence is an exciting and dynamic field, due to the availability of complete sequences of bacterial genomes and increasing sophisticated technologies for manipulating bacteria and bacterial genomes. The intricacy of virulence mechanisms offers a challenge, and there exists a clear need to decipher the 'language' used by VFs more effectively. In this article, we present the recent major updates of VFDB in an attempt to summarize some of the most important virulence mechanisms by comparing different compositions and organizations of VFs from various bacterial pathogens, identifying core components and phylogenetic clades and shedding new light on the forces that shape the evolutionary history of bacterial pathogenesis. In addition, the 2012 release of VFDB provides an improved user interface.",
"title": ""
},
{
"docid": "fa03fe8103c69dbb8328db899400cce4",
"text": "While deploying large scale heterogeneous robots in a wide geographical area, communicating among robots and robots with a central entity pose a major challenge due to robotic motion, distance and environmental constraints. In a cloud robotics scenario, communication challenges result in computational challenges as the computation is being performed at the cloud. Therefore fog nodes are introduced which shorten the distance between the robots and cloud and reduce the communication challenges. Fog nodes also reduce the computation challenges with extra compute power. However in the above scenario, maintaining continuous communication between the cloud and the robots either directly or via fog nodes is difficult. Therefore we propose a Distributed Cooperative Multi-robots Communication (DCMC) model where Robot to Robot (R2R), Robot to Fog (R2F) and Fog to Cloud (F2C) communications are being realized. Once the DCMC framework is formed, each robot establishes communication paths to maintain a consistent communication with the cloud. Further, due to mobility and environmental condition, maintaining link with a particular robot or a fog node becomes difficult. This requires pre-knowledge of the link quality such that appropriate R2R or R2F communication can be made possible. In a scenario where Global Positioning System (GPS) and continuous scanning of channels are not advisable due to energy or security constraints, we need an accurate link prediction mechanism. In this paper we propose a Collaborative Robotic based Link Prediction (CRLP) mechanism which predicts reliable communication and quantify link quality evolution in R2R and R2F communications without GPS and continuous channel scanning. We have validated our proposed schemes using joint Gazebo/Robot Operating System (ROS), MATLAB and Network Simulator (NS3) based simulations. Our schemes are efficient in terms of energy saving and accurate link prediction.",
"title": ""
},
{
"docid": "95af5f635e876c4c66711e86fa25d968",
"text": "Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human–Computer Interaction and automatic annotation, will benefit from a robust solution. In this paper, we discuss the characteristics of human motion analysis. We divide the analysis into a modeling and an estimation phase. Modeling is the construction of the likelihood function, estimation is concerned with finding the most likely pose given the likelihood surface. We discuss model-free approaches separately. This taxonomy allows us to highlight trends in the domain and to point out limitations of the current state of the art. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "83e7119065ededfd731855fe76e76207",
"text": "Introduction: In recent years, the maturity model research has gained wide acceptance in the area of information systems and many Service Oriented Architecture (SOA) maturity models have been proposed. However, there are limited empirical studies on in-depth analysis and validation of SOA Maturity Models (SOAMMs). Objectives: The objective is to present a comprehensive comparison of existing SOAMMs to identify the areas of improvement and the research opportunities. Methods: A systematic literature review is conducted to explore the SOA adoption maturity studies. Results: A total of 20 unique SOAMMs are identified and analyzed in detail. A comparison framework is defined based on SOAMM design and usage support. The results provide guidance for SOA practitioners who are involved in selection, design, and implementation of SOAMMs. Conclusion: Although all SOAMMs propose a measurement framework, only a few SOAMMs provide guidance for selecting and prioritizing improvement measures. The current state of research shows that a gap exists in both prescriptive and descriptive purpose of SOAMM usage and it indicates the need for further research.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "eef87d8905b621d2d0bb2b66108a56c1",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "18caf39ce8802f69a463cc1a4b276679",
"text": "In this thesis we describe the formal verification of a fully IEEE compliant floating point unit (FPU). The hardware is verified on the gate-level against a formalization of the IEEE standard. The verification is performed using the theorem proving system PVS. The FPU supports both single and double precision floating point numbers, normal and denormal numbers, all four IEEE rounding modes, and exceptions as required by the standard. Beside the verification of the combinatorial correctness of the FPUs we pipeline the FPUs to allow the integration into an out-of-order processor. We formally define the correctness criterion the pipelines must obey in order to work properly within the processor. We then describe a new methodology based on combining model checking and theorem proving for the verification of the pipelines.",
"title": ""
},
{
"docid": "9fc869c7e7d901e418b1b69d636cbd33",
"text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2",
"title": ""
},
{
"docid": "9f660caf74f1708339f7ca2ee067dc95",
"text": "Abstruct-Vehicle following and its effects on traffic flow has been an active area of research. Human driving involves reaction times, delays, and human errors that affect traffic flow adversely. One way to eliminate human errors and delays in vehicle following is to replace the human driver with a computer control system and sensors. The purpose of this paper is to develop an autonomous intelligent cruise control (AICC) system for automatic vehicle following, examine its effect on traffic flow, and compare its performance with that of the human driver models. The AICC system developed is not cooperative; Le., it does not exchange information with other vehicles and yet is not susceptible to oscillations and \" slinky \" effects. The elimination of the \" slinky \" effect is achieved by using a safety distance separation rule that is proportional to the vehicle velocity (constant time headway) and by designing the control system appropriately. The performance of the AICC system is found to be superior to that of the human driver models considered. It has a faster and better transient response that leads to a much smoother and faster traffic flow. Computer simulations are used to study the performance of the proposed AICC system and analyze vehicle following in a single lane, without passing, under manual and automatic control. In addition, several emergency situations that include emergency stopping and cut-in cases were simulated. The simulation results demonstrate the effectiveness of the AICC system and its potentially beneficial effects on traffic flow.",
"title": ""
},
{
"docid": "6ced60cadf69a3cd73bcfd6a3eb7705e",
"text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings is presented. The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "842cd58edd776420db869e858be07de4",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "0aa566453fa3bd4bedec5ac3249d410a",
"text": "The approach of using passage-level evidence for document retrieval has shown mixed results when it is applied to a variety of test beds with different characteristics. One main reason of the inconsistent performance is that there exists no unified framework to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval.\n An extensive set of experiments have been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language model approach for passage-based retrieval. Furthermore, our combined approach has been shown to provide better results than both document retrieval and passage-based retrieval approaches.",
"title": ""
},
{
"docid": "5aaba72970d1d055768e981f7e8e3684",
"text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash table. Although fast with strings, there is currently no information in the research literatur e on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.",
"title": ""
},
{
"docid": "69ddedba98e93523f698529716cf2569",
"text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "ad4596e24f157653a36201767d4b4f3b",
"text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain stateof-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.",
"title": ""
},
{
"docid": "708915f99102f80b026b447f858e3778",
"text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learningrate parameter. There are no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) maintain stability of learning under both on and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaption algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporaldifference learning methods in real world problems.",
"title": ""
},
{
"docid": "021bed3f2c2f09db1bad7d11108ee430",
"text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26",
"title": ""
}
] |
scidocsrr
|
5cebfafaaa63c1b9b55d705f5fbc5de4
|
SIW periodic leaky wave antenna with improved H-plane radiation pattern using baffles
|
[
{
"docid": "fcf5a390d9757ab3c8958638ccc54925",
"text": "This paper presents design equations for the microstrip-to-Substrate Integrated Waveguide (SIW) transition. The transition is decomposed in two distinct parts: the microstrip taper and the microstrip-to-SIW step. Analytical equations are used for the microstrip taper. As for the step, the microstrip is modeled by an equivalent transverse electromagnetic (TEM) waveguide. An equation relating the optimum microstrip width to the SIW width is derived using a curve fitting technique. It is shown that when the step is properly sized, it provides a return loss superior to 20 dB. Three design examples are presented using different substrate permittivity and frequency bands between 18 GHz and 75 GHz. An experimental verification is also presented. The presented technique allows to design transitions covering the complete single-mode SIW bandwidth.",
"title": ""
}
] |
[
{
"docid": "17a1de2e932b17fd3c787baa456219b6",
"text": "With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers.",
"title": ""
},
{
"docid": "4ebdfc3fe891f11902fb94973b6be582",
"text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).",
"title": ""
},
{
"docid": "061face2272a6c5a31c6fca850790930",
"text": "Antibiotic feeding studies were conducted on the firebrat,Thermobia domestica (Zygentoma, Lepismatidae) to determine if the insect's gut cellulases were of insect or microbial origin. Firebrats were fed diets containing either nystatin, metronidazole, streptomycin, tetracycline, or an antibiotic cocktail consisting of all four antibiotics, and then their gut microbial populations and gut cellulase levels were monitored and compared with the gut microbial populations and gut cellulase levels in firebrats feeding on antibiotic-free diets. Each antibiotic significantly reduced the firebrat's gut micro-flora. Nystatin reduced the firebrat's viable gut fungi by 89%. Tetracycline and the antibiotic cocktail reduced the firebrat's viable gut bacteria by 81% and 67%, respectively, and metronidazole, streptomycin, tetracycline, and the antibiotic cocktail reduced the firebrat's total gut flora by 35%, 32%, 55%, and 64%, respectively. Although antibiotics significantly reduced the firebrat's viable and total gut flora, gut cellulase levels in firebrats fed antibiotics were not significantly different from those in firebrats on an antibiotic-free diet. Furthermore, microbial populations in the firebrat's gut decreased significantly over time, even in firebrats feeding on the antibiotic-free diet, without corresponding decreases in gut cellulase levels. Based on this evidence, we conclude that the gut cellulases of firebrats are of insect origin. This conclusion implies that symbiont-independent cellulose digestion is a primitive trait in insects and that symbiont-mediated cellulose digestion is a derived condition.",
"title": ""
},
{
"docid": "446c1bf541dbed56f8321b8024391b8c",
"text": "Tokenisation has been adopted by the payment industry as a method to prevent Personal Account Number (PAN) compromise in EMV (Europay MasterCard Visa) transactions. The current architecture specified in EMV tokenisation requires online connectivity during transactions. However, it is not always possible to have online connectivity. We identify three main scenarios where fully offline transaction capability is considered to be beneficial for both merchants and consumers. Scenarios include making purchases in locations without online connectivity, when a reliable connection is not guaranteed, and when it is cheaper to carry out offline transactions due to higher communication/payment processing costs involved in online approvals. In this study, an offline contactless mobile payment protocol based on EMV tokenisation is proposed. The aim of the protocol is to address the challenge of providing secure offline transaction capability when there is no online connectivity on either the mobile or the terminal. The solution also provides end-to-end encryption to provide additional security for transaction data other than the token. The protocol is analysed against protocol objectives and we discuss how the protocol can be extended to prevent token relay attacks. The proposed solution is subjected to mechanical formal analysis using Scyther. Finally, we implement the protocol and obtain performance measurements.",
"title": ""
},
{
"docid": "b4103e5ddc58672334b66cc504dab5a6",
"text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",
"title": ""
},
{
"docid": "6ed1132aa216e15fe54e8524c9a4f8ee",
"text": "CONTEXT\nWith ageing populations, the prevalence of dementia, especially Alzheimer's disease, is set to soar. Alzheimer's disease is associated with progressive cerebral atrophy, which can be seen on MRI with high resolution. Longitudinal MRI could track disease progression and detect neurodegenerative diseases earlier to allow prompt and specific treatment. Such use of MRI requires accurate understanding of how brain changes in normal ageing differ from those in dementia.\n\n\nSTARTING POINT\nRecently, Henry Rusinek and colleagues, in a 6-year longitudinal MRI study of initially healthy elderly subjects, showed that an increased rate of atrophy in the medial temporal lobe predicted future cognitive decline with a specificity of 91% and sensitivity of 89% (Radiology 2003; 229: 691-96). WHERE NEXT? As understanding of neurodegenerative diseases increases, specific disease-modifying treatments might become available. Serial MRI could help to determine the efficacy of such treatments, which would be expected to slow the rate of atrophy towards that of normal ageing, and might also detect the onset of neurodegeneration. The amount and pattern of excess atrophy might help to predict the underlying pathological process, allowing specific therapies to be started. As the precision of imaging improves, the ability to distinguish healthy ageing from degenerative dementia should improve.",
"title": ""
},
{
"docid": "a40d11652a42ac6a6bf4368c9665fb3b",
"text": "This paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes. The taxonomy consists of a classification first of the detection principle, and second of certain operational aspects of the intrusion detection system as such. The systems are also grouped according to the increasing difficulty of the problem they attempt to address. These classifications are used predictively, pointing towards a number of areas of future research in the field of intrusion detection.",
"title": ""
},
{
"docid": "00309e5119bb0de1d7b2a583b8487733",
"text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "e4dbca720626a29f60a31ed9d22c30aa",
"text": "Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms to automatically classify text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using data mining that requires fewer documents for training. Instead of using words, word relation i.e. association rules from these words is used to derive feature set from pre-classified text documents. The concept of Naïve Bayes classifier is then used on derived features and finally only a single concept of Genetic Algorithm has been added for final classification. A system based on the proposed algorithm has been implemented and tested. The experimental results show that the proposed system works as a successful text classifier.",
"title": ""
},
{
"docid": "63c6c060e398ffaf7203edd30951f574",
"text": "Mycorrhizal networks, defined as a common mycorrhizal mycelium linking the roots of at least two plants, occur in all major terrestrial ecosystems. This review discusses the recent progress and challenges in our understanding of the characteristics, functions, ecology and models of mycorrhizal networks, with the goal of encouraging future research to improve our understanding of their ecology, adaptability and evolution. We focus on four themes in the recent literature: (1) the physical, physiological and molecular evidence for the existence of mycorrhizal networks, as well as the genetic characteristics and topology of networks in natural ecosystems; (2) the types, amounts and mechanisms of interplant material transfer (including carbon, nutrients, water, defence signals and allelochemicals) in autotrophic, mycoheterotrophic or partial mycoheterotrophic plants, with particular focus on carbon transfer; (3) the influence of mycorrhizal networks on plant establishment, survival and growth, and the implications for community diversity or stability in response to environmental stress; and (4) insights into emerging methods for modelling the spatial configuration and temporal dynamics of mycorrhizal networks, including the inclusion of mycorrhizal networks in conceptual models of complex adaptive systems. We suggest that mycorrhizal networks are fundamental agents of complex adaptive systems (ecosystems) because they provide avenues for feedbacks and cross-scale interactions that lead to selforganization and emergent properties in ecosystems. We have found that research in the genetics of mycorrhizal networks has accelerated rapidly in the past 5 y with increasing resolution and throughput of molecular tools, but there still remains a large gap between understanding genes and understanding the physiology, ecology and evolution of mycorrhizal networks in our changing environment. There is now enormous and exciting potential for mycorrhizal researchers to address these higher level questions and thus inform ecosystem and evolutionary research more broadly. a 2012 The British Mycological Society. Published by Elsevier Ltd. All rights reserved. 5; fax: þ1 604 822 9102. ca (S. W. Simard), [email protected] (K. J. Beiler), [email protected] ch.co.nz (J. R. Deslippe), [email protected] (L. J. Philip), [email protected] ritish Mycological Society. Published by Elsevier Ltd. All rights reserved. 40 S. W. Simard et al.",
"title": ""
},
{
"docid": "3ea35f018869f02209105200f78d03b4",
"text": "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "641049f7bdf194b3c326298c5679c469",
"text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …",
"title": ""
},
{
"docid": "df2bc3dce076e3736a195384ae6c9902",
"text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.",
"title": ""
},
{
"docid": "8a6a5f02a399865afbbad607fd720d00",
"text": "Estimating entropy and mutual information consistently is important for many machine learning applications. The Kozachenko-Leonenko (KL) estimator ( Kozachenko & Leonenko , 1987) is a widely used nonparametric estimator for the entropy of multivariate continuous random variables, as well as the basis of the mutual information estimator ofKraskov et al.(2004), perhaps the most widely used estimator of mutual information in this setting. Despite the practical importance of these estimators, major theoretical questions regarding their finite-sample behavior remain open. This paper proves finite-sample bounds on the bias and variance of the KL estimator, showing that it achieves the minimax convergence rate for certain classes of smooth functions. In proving these bounds, we analyze finitesample behavior of k-nearest neighbors ( k-NN) distance statistics (on which the KL estimator is based). We derive concentration inequalities for k-NN distances and a general expectation bound for statistics ofk-NN distances, which may be useful for other analyses of k-NN methods.",
"title": ""
},
{
"docid": "b402c0f2ec478cddaf202c2cfa09d966",
"text": "This paper describes a framework for building story traces (compact global views of a narrative) and story projections (selections of key story elements) and their applications in digital storytelling. Word and sense properties are extracted using the WordNet lexical database enhanced with Prolog inference rules and a number of lexical transformations. Inference rules are based on navigation in various WordNet relation chains (hypernyms, meronyms, entailment and causality links, etc.) and derived inferential closures expressed as boolean combinations of node and edge properties used to direct the navigation. The resulting abstract story traces provide a compact view of the underlying story’s key content elements and a means for automated indexing and classification of story collections [1, 2]. Ontology driven projections act as a kind of “semantic lenses” and provide a means to select a subset of a story whose key sense elements are subsumed by a set of concepts, predicates and properties expressing the focus of interest of a user. Finally, we discuss applications of these techniques in story understanding, classification of digital story collections, story generation and story-related question answering. The main contribution of the paper consists in the use of a lexical knowledge base together with an advanced rule based inference mechanism for understanding stories, and the use of the information extracted by this process for various applications.",
"title": ""
},
{
"docid": "b58c11596d8364108a9d887382237c01",
"text": "This paper discusses the phenomenon of root infinitives (RIs) in child language, focussing on a distributional restriction on the verbs that occur in this construction, viz. event-denoting verbs, as well as on a related aspect of interpretation, viz. that RIs receive modal interpretations. The modality of the construction is traced to the infinitival morphology, while the eventivity restriction is derived from the modal meaning. In contrast, the English bare form, which is often taken to instantiate the RI-phenomenon, does not seem to be subject to the eventivity constraint, nor do we find a modal reference effect. This confirms the analysis, which traces these to the infinitival morphology itself, which is absent in English. The approach not only provides a precise characterization of the distribution of the RI-phenomenon within and across languages; it also explains differences between the English bare form phenomenon and the RI-construction in languages with genuine infinitives by reference to the morphosyntax of the languages involved. The fact that children appear to be sensitive to these distinctions in the target systems at such an early age supports the general thesis of Early Morphosyntactic Convergence, which the authors argue is a pervasive property of the acquisition process. Keywords; Syntax; Acquisition; Root infinitives; Eventivity; Modality",
"title": ""
}
] |
scidocsrr
|
7c8401c55239df878548d668281024e4
|
The Problem of Trusted Third Party in Authentication and Digital Signature Protocols
|
[
{
"docid": "59308c5361d309568a94217c79cf0908",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read cryptography an introduction to computer security now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
}
] |
[
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "f10353fe0c78877a6e78509badba9fcd",
"text": "Chronic Wounds are ulcers presenting a difficult or nearly interrupted cicatrization process that increase the risk of complications to the health of patients, like amputation and infections. This research proposes a general noninvasive methodology for the segmentation and analysis of chronic wounds images by computing the wound areas affected by necrosis. Invasive techniques are usually used for this calculation, such as manual planimetry with plastic films. We investigated algorithms to perform the segmentation of wounds as well as the use of several convolutional networks for classifying tissue as Necrotic, Granulation or Slough. We tested four architectures: U-Net, Segnet, FCN8 and FCN32, and proposed a color space reduction methodology that increased the reported accuracies, specificities, sensitivities and Dice coefficients for all 4 networks, achieving very good levels.",
"title": ""
},
{
"docid": "b32d6bc2d14683c4bf3557dad560edca",
"text": "In this paper, we describe the fabrication and testing of a stretchable fabric sleeve with embedded elastic strain sensors for state reconstruction of a soft robotic joint. The strain sensors are capacitive and composed of graphite-based conductive composite electrodes and a silicone elastomer dielectric. The sensors are screenprinted directly into the fabric sleeve, which contrasts the approach of pre-fabricating sensors and subsequently attaching them to a host. We demonstrate the capabilities of the sensor-embedded fabric sleeve by determining the joint angle and end effector position of a soft pneumatic joint with similar accuracy to a traditional IMU. Furthermore, we show that the sensory sleeve is capable of capturing more complex material states, such as fabric buckling and non-constant curvatures along linkages and joints.",
"title": ""
},
{
"docid": "999070b182a328b1927be4575f04e434",
"text": "Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.",
"title": ""
},
{
"docid": "df6d4e6d74d96b7ab1951cc869caad59",
"text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.",
"title": ""
},
{
"docid": "04d5824991ada6194f3028a900d7f31b",
"text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.",
"title": ""
},
{
"docid": "e294307ea4108d8cf467585f27d3a48b",
"text": "Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "40ebf37907d738dd64b5a87b93b4a432",
"text": "Deep learning has led to many breakthroughs in machine perception and data mining. Although there are many substantial advances of deep learning in the applications of image recognition and natural language processing, very few work has been done in video analysis and semantic event detection. Very deep inception and residual networks have yielded promising results in the 2014 and 2015 ILSVRC challenges, respectively. Now the question is whether these architectures are applicable to and computationally reasonable in a variety of multimedia datasets. To answer this question, an efficient and lightweight deep convolutional network is proposed in this paper. This network is carefully designed to decrease the depth and width of the state-of-the-art networks while maintaining the high-performance. The proposed deep network includes the traditional convolutional architecture in conjunction with residual connections and very light inception modules. Experimental results demonstrate that the proposed network not only accelerates the training procedure, but also improves the performance in different multimedia classification tasks.",
"title": ""
},
{
"docid": "bc5c008b5e443b83b2a66775c849fffb",
"text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "712be4d6aabf8e76b050c30e6241ad0f",
"text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race? Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.",
"title": ""
},
{
"docid": "1d56b3aa89484e3b25557880ec239930",
"text": "We present an FPGA accelerator for the Non-uniform Fast Fourier Transform, which is a technique to reconstruct images from arbitrarily sampled data. We accelerate the compute-intensive interpolation step of the NuFFT Gridding algorithm by implementing it on an FPGA. In order to ensure efficient memory performance, we present a novel FPGA implementation for Geometric Tiling based sorting of the arbitrary samples. The convolution is then performed by a novel Data Translation architecture which is composed of a multi-port local memory, dynamic coordinate-generator and a plug-and-play kernel pipeline. Our implementation is in single-precision floating point and has been ported onto the BEE3 platform. Experimental results show that our FPGA implementation can generate fairly high performance without sacrificing flexibility for various data-sizes and kernel functions. We demonstrate up to 8X speedup and up to 27 times higher performance-per-watt over a comparable CPU implementation and up to 20% higher performance-per-watt when compared to a relevant GPU implementation.",
"title": ""
},
{
"docid": "6504562f140b49d412446817e76383e8",
"text": "As more businesses realized that data, in all forms and sizes, is critical to making the best possible decisions, we see the continued growth of systems that support massive volume of non-relational or unstructured forms of data. Nothing shows the picture more starkly than the Gartner Magic quadrant for operational database management systems, which assumes that, by 2017, all leading operational DBMSs will offer multiple data models, relational and NoSQL, in a single DBMS platform. Having a single data platform for managing both well-structured data and NoSQL data is beneficial to users; this approach reduces significantly integration, migration, development, maintenance, and operational issues. Therefore, a challenging research work is how to develop efficient consolidated single data management platform covering both relational data and NoSQL to reduce integration issues, simplify operations, and eliminate migration issues. In this tutorial, we review the previous work on multi-model data management and provide the insights on the research challenges and directions for future work. The slides and more materials of this tutorial can be found at http://udbms.cs.helsinki.fi/?tutorials/edbt2017.",
"title": ""
},
{
"docid": "660465cbd4bd95108a2381ee5a97cede",
"text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.",
"title": ""
},
{
"docid": "7d1faee4929d60d952cc8c2c12fa16d3",
"text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.",
"title": ""
},
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "0c025ec05a1f98d71c9db5bfded0a607",
"text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. The critical issue of data requirements is also be discussed as well as model choice, modelbuilding and the interpretation and use of results.",
"title": ""
},
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
{
"docid": "188e971e34192af93c36127b69d89064",
"text": "1 1 This paper has been revised and extended from the authors' previous work [23][24][25]. ABSTRACT Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.",
"title": ""
}
] |
scidocsrr
|
5edf85680d1e77a148f69ad7d261b6c2
|
Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning
|
[
{
"docid": "28ee32149227e4a26bea1ea0d5c56d8c",
"text": "We consider an agent’s uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA’S REVENGE.",
"title": ""
},
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "c0d7b92c1b88a2c234eac67c5677dc4d",
"text": "To appear in G Tesauro D S Touretzky and T K Leen eds Advances in Neural Information Processing Systems MIT Press Cambridge MA A straightforward approach to the curse of dimensionality in re inforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neu ral net Although this has been successful in the domain of backgam mon there is no guarantee of convergence In this paper we show that the combination of dynamic programming and function approx imation is not robust and in even very benign cases may produce an entirely wrong policy We then introduce Grow Support a new algorithm which is safe from divergence yet can still reap the bene ts of successful generalization",
"title": ""
}
] |
[
{
"docid": "f83bf92a38f1ce7734a5c1abce65f92f",
"text": "This paper presents an Adaptive fuzzy logic PID controller for speed control of Brushless Direct current Motor drives which is widely used in various industrial systems, such as servo motor drives, medical, automobile and aerospace industry. BLDC motors were electronically commutated motor offer many advantages over Brushed DC Motor which includes increased efficiency, longer life, low volume and high torque. This paper presents an overview of performance of fuzzy PID controller and Adaptive fuzzy PID controller using Simulink model. Tuning Parameters and computing using Normal PID controller is difficult and also it does not give satisfied control characteristics when compare to Adaptive Fuzzy PID controller. From the Simulation results we verify that Adaptive Fuzzy PID controller give better control performance when compared to fuzzy PID controller. The software Package SIMULINK was used in control and Modelling of BLDC Motor.",
"title": ""
},
{
"docid": "2d9921e49e58725c9c85da02249c8d27",
"text": "Recently, the performance of Si power devices gradually approaches the physical limit, and the latest SiC device seemingly has the ability to substitute the Si insulated gate bipolar transistor (IGBT) in 1200 V class. In this paper, we demonstrate the feasibility of further improving the Si IGBT based on the new concept of CSTBTtrade. In point of view of low turn-off loss and high uniformity in device characteristics, we employ the techniques of fine-pattern and retro grade doping in the design of new device structures, resulting in significant reduction on the turn-off loss and the VGE(th) distribution, respectively.",
"title": ""
},
{
"docid": "0be24a284a7490b709bbbdfea458b211",
"text": "This article provides a meta-analytic review of the relationship between the quality of leader-member exchanges (LMX) and citizenship behaviors performed by employees. Results based on 50 independent samples (N = 9,324) indicate a moderately strong, positive relationship between LMX and citizenship behaviors (rho = .37). The results also support the moderating role of the target of the citizenship behaviors on the magnitude of the LMX-citizenship behavior relationship. As expected, LMX predicted individual-targeted behaviors more strongly than it predicted organizational targeted behaviors (rho = .38 vs. rho = .31), and the difference was statistically significant. Whether the LMX and the citizenship behavior ratings were provided by the same source or not also influenced the magnitude of the correlation between the 2 constructs.",
"title": ""
},
{
"docid": "d36c3839127ecee4f22e846a91b32d6c",
"text": "Michelangelo Buonarroti (1475-1564) was a master anatomist as well as an artistic genius. He dissected numerous cadavers and developed a profound understanding of human anatomy. Among his best-known artworks are the frescoes painted on the ceiling of the Sistine Chapel (1508-1512), in Rome. Currently, there is some debate over whether the frescoes merely represent the teachings of the Catholic Church at the time or if there are other meanings hidden in the images. In addition, there is speculation regarding the image of the brain embedded in the fresco known as \"The Creation of Adam,\" which contains anatomic features of the midsagittal and lateral surfaces of the brain. Within this context, we report our use of Image Pro Plus Software 6.0 to demonstrate mathematical evidence that Michelangelo painted \"The Creation of Adam\" using the Divine Proportion/Golden Ratio (GR) (1.6). The GR is classically associated with greater structural efficiency and is found in biological structures and works of art by renowned artists. Thus, according to the evidence shown in this article, we can suppose that the beauty and harmony recognized in all Michelangelo's works may not be based solely on his knowledge of human anatomical proportions, but that the artist also probably knew anatomical structures that conform to the GR display greater structural efficiency. It is hoped that this report will at least stimulate further scientific and scholarly contributions to this fascinating topic, as the study of these works of art is essential for the knowledge of the history of Anatomy.",
"title": ""
},
{
"docid": "b540fb20a265d315503543a5d752f486",
"text": "Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as welldefined quantifiers of a deep network’s expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep convolutional arithmetic circuit in terms of a Tensor Network is made available. This description enables us to carry a graph-theoretic analysis of a convolutional network, tying its expressiveness to a min-cut in the graph which characterizes it. Thus, we demonstrate a direct control over the inductive bias of the designed deep convolutional network via its channel numbers, which we show to be related to this min-cut in the underlying graph. This result is relevant to any practitioner designing a convolutional network for a specific task. We theoretically analyze convolutional arithmetic circuits, and empirically validate our findings on more common convolutional networks which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal structural connection to quantum entanglement, are two interdisciplinary bridges that are brought forth by this work.",
"title": ""
},
{
"docid": "2fbcd34468edf53ee08e0a76a048c275",
"text": "Recently, the introduction of the generative adversarial network (GAN) and its variants has enabled the generation of realistic synthetic samples, which has been used for enlarging training sets. Previous work primarily focused on data augmentation for semi-supervised and supervised tasks. In this paper, we instead focus on unsupervised anomaly detection and propose a novel generative data augmentation framework optimized for this task. In particular, we propose to oversample infrequent normal samples - normal samples that occur with small probability, e.g., rare normal events. We show that these samples are responsible for false positives in anomaly detection. However, oversampling of infrequent normal samples is challenging for real-world high-dimensional data with multimodal distributions. To address this challenge, we propose to use a GAN variant known as the adversarial autoencoder (AAE) to transform the high-dimensional multimodal data distributions into low-dimensional unimodal latent distributions with well-defined tail probability. Then, we systematically oversample at the 'edge' of the latent distributions to increase the density of infrequent normal samples. We show that our oversampling pipeline is a unified one: it is generally applicable to datasets with different complex data distributions. To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection. We validate our method by demonstrating consistent improvements across several real-world datasets.",
"title": ""
},
{
"docid": "bd178b04fe57db1ce408452edeb8a6d4",
"text": "BACKGROUND\nIn 1998, the French Ministry of Environment revealed that of 71 French municipal solid waste incinerators processing more than 6 metric tons of material per hour, dioxin emission from 15 of them was above the 10 ng international toxic equivalency factor/m3 (including Besançon, emitting 16.3 ng international toxic equivalency factor/m3) which is substantially higher than the 0.1 international toxic equivalency factor/m3 prescribed by a European directive of 1994. In 2000, a macrospatial epidemiological study undertaken in the administrative district of Doubs, identified two significant clusters of soft-tissue sarcoma and non Hodgkin lymphoma in the vicinity of the municipal solid waste incinerator of Besançon. This microspatial study (at the Besançon city scale), was designed to test the association between the exposure to dioxins emitted by the municipal solid waste incinerator of Besançon and the risk of soft-tissue sarcoma.\n\n\nMETHODS\nGround-level concentrations of dioxin were modeled with a dispersion model (Air Pollution Control 3 software). Four increasing zones of exposure were defined. For each case of soft tissue sarcoma, ten controls were randomly selected from the 1990 census database and matched for gender and age. A geographic information system allowed the attribution of a dioxin concentration category to cases and controls, according to their place of residence.\n\n\nRESULTS\nThirty-seven cases of soft tissue sarcoma were identified by the Doubs cancer registry between 1980 and 1995, corresponding to a standardized incidence (French population) of 2.44 per 100,000 inhabitants. Compared with the least exposed zone, the risk of developing a soft tissue sarcoma was not significantly increased for people living in the more exposed zones.\n\n\nCONCLUSION\nBefore definitely concluding that there is no relationship between the exposure to dioxin released by a solid waste incinerator and soft tissue sarcoma, a nationwide investigation based on other registries should be conducted.",
"title": ""
},
{
"docid": "d952de00554b9a6bb21fbce802729b3f",
"text": "In the past five years there has been a dramatic increase in work on Search Based Software Engineering (SBSE), an approach to software engineering in which search based optimisation algorithms are used to address problems in Software Engineering. SBSE has been applied to problems throughout the Software Engineering lifecycle, from requirements and project planning to maintenance and re-engineering. The approach is attractive because it offers a suite of adaptive automated and semi-automated solutions in situations typified by large complex problem spaces with multiple competing and conflicting objectives. This paper provides a review and classification of literature on SBSE. The paper identifies research trends and relationships between the techniques applied and the applications to which they have been applied and highlights gaps in the literature and avenues for further research.",
"title": ""
},
{
"docid": "cb7e4a454d363b9cb1eb6118a4b00855",
"text": "Stream processing applications reduce the latency of batch data pipelines and enable engineers to quickly identify production issues. Many times, a service can log data to distinct streams, even if they relate to the same real-world event (e.g., a search on Facebook’s search bar). Furthermore, the logging of related events can appear on the server side with different delay, causing one stream to be significantly behind the other in terms of logged event times for a given log entry. To be able to stitch this information together with low latency, we need to be able to join two different streams where each stream may have its own characteristics regarding the degree in which its data is out-of-order. Doing so in a streaming fashion is challenging as a join operator consumes lots of memory, especially with significant data volumes. This paper describes an end-to-end streaming join service that addresses the challenges above through a streaming join operator that uses an adaptive stream synchronization algorithm that is able to handle the different distributions we observe in real-world streams regarding their event times. This synchronization scheme paces the parsing of new data and reduces overall operator memory footprint while still providing high accuracy. We have integrated this into a streaming SQL system and have successfully reduced the latency of several batch pipelines using this approach. PVLDB Reference Format: G. Jacques-Silva, R. Lei, L. Cheng, G. J. Chen, K. Ching, T. Hu, Y. Mei, K. Wilfong, R. Shetty, S. Yilmaz, A. Banerjee, B. Heintz, S. Iyer, A. Jaiswal. Providing Streaming Joins as a Service at Facebook. PVLDB, 11 (12): 1809-1821, 2018. DOI: : https://doi.org/10.14778/3229863.3229869",
"title": ""
},
{
"docid": "7753a65e07ace406d29822c9d165c83f",
"text": "A new technique is presented for matching image features to maps or models. The technique forms all possible pairs of image features and model features which match on the basis of local evidence alone. For each possible pair of matching features the parameters of an RST (rotation, scaling, and translation) transformation are derived. Clustering in the space of all possible RST parameter sets reveals a good global transformation which matches many image features to many model features. Results with a variety of data sets are presented which demonstrate that the technique does not require sophisticated feature detection and is robust with respect to changes of image orientation and content. Examples in both cartography and object detection are given.",
"title": ""
},
{
"docid": "74d2d780291e9dbf2e725b55ccadd278",
"text": "Organizational climate and organizational culture theory and research are reviewed. The article is first framed with definitions of the constructs, and preliminary thoughts on their interrelationships are noted. Organizational climate is briefly defined as the meanings people attach to interrelated bundles of experiences they have at work. Organizational culture is briefly defined as the basic assumptions about the world and the values that guide life in organizations. A brief history of climate research is presented, followed by the major accomplishments in research on the topic with regard to levels issues, the foci of climate research, and studies of climate strength. A brief overview of the more recent study of organizational culture is then introduced, followed by samples of important thinking and research on the roles of leadership and national culture in understanding organizational culture and performance and culture as a moderator variable in research in organizational behavior. The final section of the article proposes an integration of climate and culture thinking and research and concludes with practical implications for the management of effective contemporary organizations. Throughout, recommendations are made for additional thinking and research.",
"title": ""
},
{
"docid": "281e8785214bb209a142d420dfdc5f26",
"text": "This study examined achievement when podcasts were used in place of lecture in the core technology course required for all students seeking teacher licensure at a large research-intensive university in the Southeastern United States. Further, it examined the listening preferences of the podcast group and the barriers to podcast use. The results revealed that there was no significant difference in the achievement of preservice teachers who experienced podcast instruction versus those who received lecture instruction. Further, there was no significant difference in their study habits. Participants preferred to use a computer and Blackboard for downloading the podcasts, which they primarily listened to at home. They tended to like the podcasts as well as the length of the podcasts and felt that they were reasonably effective for learning. They agreed that the podcasts were easy to use but disagreed that they should be used to replace lecture. Barriers to podcast use include unfamiliarity with podcasts, technical problems in accessing and downloading podcasts, and not seeing the relevance of podcasts to their learning. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7ea2f7c549721f95e10b27af9de3d44b",
"text": "Declaration Declaration I hereby declare that except where specific reference is made to the work of others, the contents of this thesis are original and have not been submitted in whole or in part for consideration for any other degree or qualification in this, or any other university. This thesis is my own work and contains nothing, which is the outcome of work done in collaboration with others, except as specified in the text and Acknowledgements. Abstract I Abstract Nowadays, with the smart device developing and life quality improving, people's requirement of real-time, fast, accurate and smart health service has been increased. As the technology advances, E-Health Care concept has been emerging in the last decades and received extensive attention. With the help of Internet and computing technologies, a lot of E-Health Systems have been proposed that change traditional medical treatment mode to remote or online medical treatment. Furthermore, due to the rapidly development of Internet and wireless network in recent years, many enhanced E-Health Systems based on Wireless Sensor Network have been proposed that open a new research field. Sensor Network by taking the advantage of the latest technologies. The proposed E-Health System is a wireless and portable system, which consists of the Wireless E-Health Gateway and Wireless E-Health Sensor Nodes. The system has been further enhanced by Smart Technology that combined the advantages of the smart phone. The proposed system has change the mechanisms of traditional medical care and provide real-time, portable, accurate and flexible medical care services to users. With the E-Health System wieldy deployed, it requires powerful computing center to deal with the mass health record data. Cloud technology as an emerging technology has applied in the proposed system. This research has used Amazon Web Services (AWS) – Cloud Computing Services to develop a powerful, scalable and fast connection web service for proposed E-Health Management System. Abstract II The security issue is a common problem in the wireless network, and it is more important for E-Health System as the personal health data is private and should be safely transferred and storage. Hence, this research work also focused on the cryptographic algorithm to reinforce the security of E-Health System. Due to the limitations of embedded system resources, such as: lower computing, smaller battery, and less memory, which cannot support modem advance encryption standard. In this research, Rivest Cipher Version 5 (RC5) as the simple, security and software …",
"title": ""
},
{
"docid": "274829e884c6ba5f425efbdce7604108",
"text": "The Internet of Things (IoT) is constantly evolving and is giving unique solutions to the everyday problems faced by man. “Smart City” is one such implementation aimed at improving the lifestyle of human beings. One of the major hurdles in most cities is its solid waste management, and effective management of the solid waste produced becomes an integral part of a smart city. This paper aims at providing an IoT based architectural solution to tackle the problems faced by the present solid waste management system. By providing a complete IoT based system, the process of tracking, collecting, and managing the solid waste can be easily automated and monitored efficiently. By taking the example of the solid waste management crisis of Bengaluru city, India, we have come up with the overall system architecture and protocol stack to give a IoT based solution to improve the reliability and efficiency of the system. By making use of sensors, we collect data from the garbage bins and send them to a gateway using LoRa technology. The data from various garbage bins are collected by the gateway and sent to the cloud over the Internet using the MQTT (Message Queue Telemetry Transport) protocol. The main advantage of the proposed system is the use of LoRa technology for data communication which enables long distance data transmission along with low power consumption as compared to Wi-Fi, Bluetooth or Zigbee.",
"title": ""
},
{
"docid": "1a9be0a664da314c143ca430bd6f4502",
"text": "Fingerprint image quality is an important factor in the perf ormance of Automatic Fingerprint Identification Systems(AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerp rint image quality measurement. We propose limited ring-wedge spectral measu r to estimate the global fingerprint image features, and inhomogeneity with d rectional contrast to estimate local fingerprint image features. Experimental re sults demonstrate the effectiveness of our proposal.",
"title": ""
},
{
"docid": "b9ca1209ce50bf527d68109dbdf7431c",
"text": "The MATLAB model of the analog multiplier based on the sigma delta modulation is developed. Different modes of multiplier are investigated and obtained results are compared with analytical results.",
"title": ""
},
{
"docid": "99faeab3adcf89a3f966b87547cea4e7",
"text": "In-service structural health monitoring of composite aircraft structures plays a key role in the assessment of their performance and integrity. In recent years, Fibre Optic Sensors (FOS) have proved to be a potentially excellent technique for real-time in-situ monitoring of these structures due to their numerous advantages, such as immunity to electromagnetic interference, small size, light weight, durability, and high bandwidth, which allows a great number of sensors to operate in the same system, and the possibility to be integrated within the material. However, more effort is still needed to bring the technology to a fully mature readiness level. In this paper, recent research and applications in structural health monitoring of composite aircraft structures using FOS have been critically reviewed, considering both the multi-point and distributed sensing techniques.",
"title": ""
},
{
"docid": "e0301bf133296361b4547730169d2672",
"text": "Radar warning receivers (RWRs) classify the intercepted pulses into clusters utilizing multiple parameter deinterleaving. In order to make classification more elaborate time-of-arrival (TOA) deinterleaving should be performed for each cluster. In addition, identification of the classified pulse sequences has been exercised at last. It is essential to identify the classified sequences with a minimum number of pulses. This paper presents a method for deinterleaving of intercepted signals having small number of pulses that belong to stable or jitter pulse repetition interval (PRI) types in the presence of missed pulses. It is necessary for both stable and jitter PRI TOA deinterleaving algorithms to utilize predefined PRI range. However, jitter PRI TOA deinterleaving also requires variation about mean PRI value of emitter of interest as a priori.",
"title": ""
},
{
"docid": "0cb0c5f181ef357cd81d4a290d2cbc14",
"text": "With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, a reliable calibration is needed, both for camera internal calibration, as well as Eye-to-Hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well proven methods, allowing a quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or robot have been repositioned. Modular design of the system ensures flexibility regarding a number of sensors used as well as different hardware choices. The framework has been proven to work by practical experiments to analyze the quality of the calibration versus the number of positions of the checkerboard used for each of the calibration procedures.",
"title": ""
}
] |
scidocsrr
|
636f172b02e5af09431bf0c148ce9de8
|
Swarm intelligence based routing protocol for wireless sensor networks: Survey and future directions
|
[
{
"docid": "510b9b709d8bd40834ed0409d1e83d4d",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
},
{
"docid": "376c9736ccd7823441fd62c46eee0242",
"text": "Description: Infrastructure for Homeland Security Environments Wireless Sensor Networks helps readers discover the emerging field of low-cost standards-based sensors that promise a high order of spatial and temporal resolution and accuracy in an ever-increasing universe of applications. It shares the latest advances in science and engineering paving the way towards a large plethora of new applications in such areas as infrastructure protection and security, healthcare, energy, food safety, RFID, ZigBee, and processing. Unlike other books on wireless sensor networks that focus on limited topics in the field, this book is a broad introduction that covers all the major technology, standards, and application topics. It contains everything readers need to know to enter this burgeoning field, including current applications and promising research and development; communication and networking protocols; middleware architecture for wireless sensor networks; and security and management. The straightforward and engaging writing style of this book makes even complex concepts and processes easy to follow and understand. In addition, it offers several features that help readers grasp the material and then apply their knowledge in designing their own wireless sensor network systems: Examples illustrate how concepts are applied to the development and application of wireless sensor networks Detailed case studies set forth all the steps of design and implementation needed to solve real-world problems Chapter conclusions that serve as an excellent review by stressing the chapter's key concepts References in each chapter guide readers to in-depth discussions of individual topics This book is ideal for networking designers and engineers who want to fully exploit this new technology and for government employees who are concerned about homeland security. With its examples, it is appropriate for use as a coursebook for upper-level undergraduates and graduate students.",
"title": ""
}
] |
[
{
"docid": "7ca908e7896afc49a0641218e1c4febf",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "5ed1a40b933e44f0a7f7240bbca24ab4",
"text": "We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states and actions, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "61411c55041f40c3b0c63f3ebd4c621f",
"text": "This paper presents an application of neural network approach for the prediction of peak ground acceleration (PGA) using the strong motion data from Turkey, as a soft computing technique to remove uncertainties in attenuation equations. A training algorithm based on the Fletcher–Reeves conjugate gradient back-propagation was developed and employed for three sample sets of strong ground motion. The input variables in the constructed artificial neural network (ANN) model were the magnitude, the source-to-site distance and the site conditions, and the output was the PGA. The generalization capability of ANN algorithms was tested with the same training data. To demonstrate the authenticity of this approach, the network predictions were compared with the ones from regressions for the corresponding attenuation equations. The results indicated that the fitting between the predicted PGA values by the networks and the observed ones yielded high correlation coefficients (R). In addition, comparisons of the correlations by the ANN and the regression method showed that the ANN approach performed better than the regression. Even though the developed ANN models suffered from optimal configuration about the generalization capability, they can be conservatively used to well understand the influence of input parameters for the PGA predictions. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "34f6603912c9775fc48329e596467107",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "c4be39977487cdebc8127650c8eda433",
"text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3 order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.",
"title": ""
},
{
"docid": "1909d62daf3df32fad94d6a205cc0a8c",
"text": "Scalability properties of deep neural networks raise key re search questions, particularly as the problems considered become larger and more challenging. This paper expands on the idea of conditional computation introd uce in [2], where the nodes of a deep network are augmented by a set of gating uni ts that determine when a node should be calculated. By factorizing the wei ght matrix into a low-rank approximation, an estimation of the sign of the pr -nonlinearity activation can be efficiently obtained. For networks using rec tifi d-linear hidden units, this implies that the computation of a hidden unit wit h an estimated negative pre-nonlinearity can be omitted altogether, as its val ue will become zero when nonlinearity is applied. For sparse neural networks, this c an result in considerable speed gains. Experimental results using the MNIST and SVHN d ata sets with a fully-connected deep neural network demonstrate the perf ormance robustness of the proposed scheme with respect to the error introduced b y the conditional computation process.",
"title": ""
},
{
"docid": "94e2bfa218791199a59037f9ea882487",
"text": "As a developing discipline, research results in the field of human computer interaction (HCI) tends to be \"soft\". Many workers in the field have argued that the advancement of HCI lies in \"hardening\" the field with quantitative and robust models. In reality, few theoretical, quantitative tools are available in user interface research and development. A rare exception to this is Fitts' law. Extending information theory to human perceptual-motor system, Paul Fitts (1954) found a logarithmic relationship that models speed accuracy tradeoffs in aimed movements. A great number of studies have verified and / or applied Fitts' law to HCI problems, such as pointing performance on a screen, making Fitts' law one of the most intensively studied topic in the HCI literature.",
"title": ""
},
{
"docid": "f64e65df9db7219336eafb20d38bf8cf",
"text": "With predictions that this nursing shortage will be more severe and have a longer duration than has been previously experienced, traditional strategies implemented by employers will have limited success. The aging nursing workforce, low unemployment, and the global nature of this shortage compound the usual factors that contribute to nursing shortages. For sustained change and assurance of an adequate supply of nurses, solutions must be developed in several areas: education, healthcare deliver systems, policy and regulations, and image. This shortage is not solely nursing's issue and requires a collaborative effort among nursing leaders in practice and education, health care executives, government, and the media. This paper poses several ideas of solutions, some already underway in the United States, as a catalyst for readers to initiate local programs.",
"title": ""
},
{
"docid": "d0cdbd1137e9dca85d61b3d90789d030",
"text": "In this paper, we present a methodology for recognizing seatedpostures using data from pressure sensors installed on a chair.Information about seated postures could be used to help avoidadverse effects of sitting for long periods of time or to predictseated activities for a human-computer interface. Our system designdisplays accurate near-real-time classification performance on datafrom subjects on which the posture recognition system was nottrained by using a set of carefully designed, subject-invariantsignal features. By using a near-optimal sensor placement strategy,we keep the number of required sensors low thereby reducing costand computational complexity. We evaluated the performance of ourtechnology using a series of empirical methods including (1)cross-validation (classification accuracy of 87% for ten posturesusing data from 31 sensors), and (2) a physical deployment of oursystem (78% classification accuracy using data from 19sensors).",
"title": ""
},
{
"docid": "0cca7892dc3a741deca22f7699e1ed7e",
"text": "Document polarity detection is a part of sentiment analysis where a document is classified as a positive polarity document or a negative polarity document. The applications of polarity detection are content filtering and opinion mining. Content filtering of negative polarity documents is an important application to protect children from negativity and can be used in security filters of organizations. In this paper, dictionary based method using polarity lexicon and machine learning algorithms are applied for polarity detection of Kannada language documents. In dictionary method, a manually created polarity lexicon of 5043 Kannada words is used and compared with machine learning algorithms like Naïve Bayes and Maximum Entropy. It is observed that performance of Naïve Bayes and Maximum Entropy is better than dictionary based method with accuracy of 0.90, 0.93 and 0.78 respectively.",
"title": ""
},
{
"docid": "a448b5e4e4bd017049226f06ce32fa9d",
"text": "We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator’s action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphoto- realistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 com- pared to the most accurate prior approximation scheme, while being the fastest. We show that our models general- ize across datasets and across resolutions, and investigate a number of extensions of the presented approach.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "0131e5a748fb70627746068d33553eca",
"text": "Fast changing, increasingly complex, and diverse computing platforms pose central problems in scientific computing: How to achieve, with reasonable effort, portable optimal performance? We present SPIRAL, which considers this problem for the performance-critical domain of linear digital signal processing (DSP) transforms. For a specified transform, SPIRAL automatically generates high-performance code that is tuned to the given platform. SPIRAL formulates the tuning as an optimization problem and exploits the domain-specific mathematical structure of transform algorithms to implement a feedback-driven optimizer. Similar to a human expert, for a specified transform, SPIRAL \"intelligently\" generates and explores algorithmic and implementation choices to find the best match to the computer's microarchitecture. The \"intelligence\" is provided by search and learning techniques that exploit the structure of the algorithm and implementation space to guide the exploration and optimization. SPIRAL generates high-performance code for a broad set of DSP transforms, including the discrete Fourier transform, other trigonometric transforms, filter transforms, and discrete wavelet transforms. Experimental results show that the code generated by SPIRAL competes with, and sometimes outperforms, the best available human tuned transform library code.",
"title": ""
},
{
"docid": "0d2e5667545ebc9380416f9f625dd836",
"text": "New developments in assistive technology are likely to make an important contribution to the care of elderly people in institutions and at home. Video-monitoring, remote health monitoring, electronic sensors and equipment such as fall detectors, door monitors, bed alerts, pressure mats and smoke and heat alarms can improve older people's safety, security and ability to cope at home. Care at home is often preferable to patients and is usually less expensive for care providers than institutional alternatives.",
"title": ""
},
{
"docid": "e8f15d3689f1047cd05676ebd72cc0fc",
"text": "We argue that in fully-connected networks a phase transition delimits the overand under-parametrized regimes where fitting can or cannot be achieved. Under some general conditions, we show that this transition is sharp for the hinge loss. In the whole over-parametrized regime, poor minima of the loss are not encountered during training since the number of constraints to satisfy is too small to hamper minimization. Our findings support a link between this transition and the generalization properties of the network: as we increase the number of parameters of a given model, starting from an under-parametrized network, we observe that the generalization error displays three phases: (i) initial decay, (ii) increase until the transition point — where it displays a cusp — and (iii) slow decay toward a constant for the rest of the over-parametrized regime. Thereby we identify the region where the classical phenomenon of over-fitting takes place, and the region where the model keeps improving, in line with previous empirical observations for modern neural networks.",
"title": ""
},
{
"docid": "574259df6c01fd0c46160b3f8548e4e7",
"text": "Hashtag has emerged as a widely used concept of popular culture and campaigns, but its implications on people’s privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user’s precise location from hashtags with accuracy of 70% to 76%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus, practical in real-world settings.",
"title": ""
},
{
"docid": "1a5b28583eaf7cab8cc724966d700674",
"text": "Advertising (ad) revenue plays a vital role in supporting free websites. When the revenue dips or increases sharply, ad system operators must find and fix the rootcause if actionable, for example, by optimizing infrastructure performance. Such revenue debugging is analogous to diagnosis and root-cause analysis in the systems literature but is more general. Failure of infrastructure elements is only one potential cause; a host of other dimensions (e.g., advertiser, device type) can be sources of potential causes. Further, the problem is complicated by derived measures such as costs-per-click that are also tracked along with revenue. Our paper takes the first systematic look at revenue debugging. Using the concepts of explanatory power, succinctness, and surprise, we propose a new multidimensional root-cause algorithm for fundamental and derived measures of ad systems to identify the dimension mostly likely to blame. Further, we implement the attribution algorithm and a visualization interface in a tool called the Adtributor to help troubleshooters quickly identify potential causes. Based on several case studies on a very large ad system and extensive evaluation, we show that the Adtributor has an accuracy of over 95% and helps cut down troubleshooting time by an order of magnitude.",
"title": ""
},
{
"docid": "30279db171fffe6fac561541a5d175ca",
"text": "Deformable displays can provide two major benefits compared to rigid displays: Objects of different shapes and deformabilities, situated in our physical environment, can be equipped with deformable displays, and users can benefit from their pre-existing knowledge about the interaction with physical objects when interacting with deformable displays. In this article we present InformationSense, a large, highly deformable cloth display. The article contributes to two research areas in the context of deformable displays: It presents an approach for the tracking of large, highly deformable surfaces, and it presents one of the first UX analyses of cloth displays that will help with the design of future interaction techniques for this kind of display. The comparison of InformationSense with a rigid display interface unveiled the trade-off that while users are able to interact with InformationSense more naturally and significantly preferred InformationSense in terms of joy of use, they preferred the rigid display interfaces in terms of efficiency. This suggests that deformable displays are already suitable if high hedonic qualities are important but need to be enhanced with additional digital power if high pragmatic qualities are required.",
"title": ""
},
{
"docid": "18e5b72779f6860e2a0f2ec7251b0718",
"text": "This paper presents a novel dielectric resonator filter exploiting dual TM11 degenerate modes. The dielectric rod resonators are short circuited on the top and bottom surfaces to the metallic cavity. The dual-mode cavities can be conveniently arranged in many practical coupling configurations. Through-holes in height direction are made in each of the dielectric rods for the frequency tuning and coupling screws. All the coupling elements, including inter-cavity coupling elements, are accessible from the top of the filter cavity. This planar coupling configuration is very attractive for composing a diplexer or a parallel multifilter assembly using the proposed filter structure. To demonstrate the new filter technology, two eight-pole filters with cross-couplings for UMTS band are prototyped and tested. It has been experimentally shown that as compared to a coaxial combline filter with a similar unloaded Q, the proposed dual-mode filter can save filter volume by more than 50%. Moreover, a simple method that can effectively suppress the lower band spurious mode is also presented.",
"title": ""
}
] |
scidocsrr
|
447617c2bca7b7adc981fd69a451a183
|
Object-Centric Anomaly Detection by Attribute-Based Reasoning
|
[
{
"docid": "704d068f791a8911068671cb3dca7d55",
"text": "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.",
"title": ""
}
] |
[
{
"docid": "9113e4ba998ec12dd2536073baf40610",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
{
"docid": "a4488fdd33bab600bd4de1f02e3a418e",
"text": "An antidote for reproductive learning is to engage learners in active manipulative), constructive, intentional, complex, authentic, cooperative (collaborative and conversational), and reflective learning activities. Those characteristics are the goal of constructivist learning environments (CLEs). This paper presents a model for designing CLEs, which surround a problem/project/issue/question (including problem context, problem representation space, and problem, manipulation space) with related cases (to supplant learners’ lack of experiences and convey complexity), information resources that support knowledge construction, cognitive tools, conversation and collaboration tools, and social-contextual support for implementation. These components are supported by instructional supports, including modeling, coaching, and scaffolding. This model is directly applicable to web-based learning. Examples of web-CLEs will be demonstrated in the presentation.",
"title": ""
},
{
"docid": "b81c7806be48b25497c84cd1e623f6fc",
"text": "Time-of-flight range sensors have error characteristics, which are complementary to passive stereo. They provide real-time depth estimates in conditions where passive stereo does not work well, such as on white walls. In contrast, these sensors are noisy and often perform poorly on the textured scenes where stereo excels. We explore their complementary characteristics and introduce a method for combining the results from both methods that achieve better accuracy than either alone. In our fusion framework, the depth probability distribution functions from each of these sensor modalities are formulated and optimized. Robust and adaptive fusion is built on a pixel-wise reliability weighting function calculated for each method. In addition, since time-of-flight devices have primarily been used as individual sensors, they are typically poorly calibrated. We introduce a method that substantially improves upon the manufacturer's calibration. We demonstrate that our proposed techniques lead to improved accuracy and robustness on an extensive set of experimental results.",
"title": ""
},
{
"docid": "6adfcf6aec7b33a82e3e5e606c93295d",
"text": "Cyber security is a serious global concern. The potential of cyber terrorism has posed a threat to national security; meanwhile the increasing prevalence of malware and incidents of cyber attacks hinder the utilization of the Internet to its greatest benefit and incur significant economic losses to individuals, enterprises, and public organizations. This paper presents some recent advances in intrusion detection, feature selection, and malware detection. In intrusion detection, stealthy and low profile attacks that include only few carefully crafted packets over an extended period of time to delude firewalls and the intrusion detection system (IDS) have been difficult to detect. In protection against malware (trojans, worms, viruses, etc.), how to detect polymorphic and metamorphic versions of recognized malware using static scanners is a great challenge. We present in this paper an agent based IDS architecture that is capable of detecting probe attacks at the originating host and denial of service (DoS) attacks at the boundary controllers. We investigate and compare the performance of different classifiers implemented for intrusion detection purposes. Further, we study the performance of the classifiers in real-time detection of probes and DoS attacks, with respect to intrusion data collected on a real operating network that includes a variety of simulated attacks. Feature selection is as important for IDS as it is for many other modeling problems. We present several techniques for feature selection and compare their performance in the IDS application. It is demonstrated that, with appropriately chosen features, both probes and DoS attacks can be detected in real time or near real time at the originating host or at the boundary controllers. We also briefly present some encouraging recent results in detecting polymorphic and metamorphic malware with advanced static, signature-based scanning techniques.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "e281a8dc16b10dff80fad36d149a8a2f",
"text": "We present a tree router for multichip systems that guarantees deadlock-free multicast packet routing without dropping packets or restricting their length. Multicast routing is required to efficiently connect massively parallel systems' computational units when each unit is connected to thousands of others residing on multiple chips, which is the case in neuromorphic systems. Our tree router implements this one-to-many routing by branching recursively-broadcasting the packet within a specified subtree. Within this subtree, the packet is only accepted by chips that have been programmed to do so. This approach boosts throughput because memory look-ups are avoided enroute, and keeps the header compact because it only specifies the route to the subtree's root. Deadlock is avoided by routing in two phases-an upward phase and a downward phase-and by restricting branching to the downward phase. This design is the first fully implemented wormhole router with packet-branching that can never deadlock. The design's effectiveness is demonstrated in Neurogrid, a million-neuron neuromorphic system consisting of sixteen chips. Each chip has a 256 × 256 silicon-neuron array integrated with a full-custom asynchronous VLSI implementation of the router that delivers up to 1.17 G words/s across the sixteen-chip network with less than 1 μs jitter.",
"title": ""
},
{
"docid": "566a2b2ff835d10e0660fb89fd6ae618",
"text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "33cce2750db6e1f680e8a6a2c89ad30a",
"text": "Present theories of visual recognition emphasize the role of interactive processing across populations of neurons within a given network, but the nature of these interactions remains unresolved. In particular, data describing the sufficiency of feedforward algorithms for conscious vision and studies revealing the functional relevance of feedback connections to the striate cortex seem to offer contradictory accounts of visual information processing. TMS is a good method to experimentally address this issue, given its excellent temporal resolution and its capacity to establish causal relations between brain function and behavior. We studied 20 healthy volunteers in a visual recognition task. Subjects were briefly presented with images of animals (birds or mammals) in natural scenes and were asked to indicate the animal category. MRI-guided stereotaxic single TMS pulses were used to transiently disrupt striate cortex function at different times after image onset (SOA). Visual recognition was significantly impaired when TMS was applied over the occipital pole at SOAs of 100 and 220 msec. The first interval has consistently been described in previous TMS studies and is explained as the interruption of the feedforward volley of activity. Given the late latency and discrete nature of the second peak, we hypothesize that it represents the disruption of a feedback projection to V1, probably from other areas in the visual network. These results provide causal evidence for the necessity of recurrent interactive processing, through feedforward and feedback connections, in visual recognition of natural complex images.",
"title": ""
},
{
"docid": "62309d3434c39ea5f9f901f8eb635539",
"text": "The flap design according Karaca et al., used during surgery for removal of impacted third molars prevents complications related to 2 molar periodontal status [125]. Suarez et al. believe that this design influences healing primary [122]. This prevents wound dehiscence and evaluated the suture technique to achieve this closure to Sanchis et al. [124], believe that primary closure avoids draining the socket and worse postoperative inflammation and pain, choose to place drains, obtaining a less postoperative painful [127].",
"title": ""
},
{
"docid": "cda5c6908b4f52728659f89bb082d030",
"text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.",
"title": ""
},
{
"docid": "dbf3650aadb4c18500ec3676d23dba99",
"text": "Current search engines do not, in general, perform well with longer, more verbose queries. One of the main issues in processing these queries is identifying the key concepts that will have the most impact on effectiveness. In this paper, we develop and evaluate a technique that uses query-dependent, corpus-dependent, and corpus-independent features for automatic extraction of key concepts from verbose queries. We show that our method achieves higher accuracy in the identification of key concepts than standard weighting methods such as inverse document frequency. Finally, we propose a probabilistic model for integrating the weighted key concepts identified by our method into a query, and demonstrate that this integration significantly improves retrieval effectiveness for a large set of natural language description queries derived from TREC topics on several newswire and web collections.",
"title": ""
},
{
"docid": "577f90976559e45c56bc4ca8004f990f",
"text": "In this paper, we address the problem of recognizing images with weakly annotated text tags. Most previous work either cannot be applied to the scenarios where the tags are loosely related to the images, or simply take a pre-fusion at the feature level or a post-fusion at the decision level to combine the visual and textual content. Instead, we first encode the text tags as the relations among the images, and then propose a semi-supervised relational topic model (ss-RTM) to explicitly model the image content and their relations. In such way, we can efficiently leverage the loosely related tags, and build an intermediate level representation for a collection of weakly annotated images. The intermediate level representation can be regarded as a mid-level fusion of the visual and textual content, which is able to explicitly model their intrinsic relationships. Moreover, image category labels are also modeled in the ss-RTM, and recognition can be conducted without training an additional discriminative classifier. Our extensive experiments on social multimedia datasets (images+tags) demonstrated the advantages of the proposed model.",
"title": ""
},
{
"docid": "c5efe5fe7c945e48f272496e7c92bb9c",
"text": "Knowing when a classifier’s prediction can be trusted is useful in many applications and critical for safely using AI. While the bulk of the effort in machine learning research has been towards improving classifier performance, understanding when a classifier’s predictions should and should not be trusted has received far less attention. The standard approach is to use the classifier’s discriminant or confidence score; however, we show there exists an alternative that is more effective in many situations. We propose a new score, called the trust score, which measures the agreement between the classifier and a modified nearest-neighbor classifier on the testing example. We show empirically that high (low) trust scores produce surprisingly high precision at identifying correctly (incorrectly) classified examples, consistently outperforming the classifier’s confidence score as well as many other baselines. Further, under some mild distributional assumptions, we show that if the trust score for an example is high (low), the classifier will likely agree (disagree) with the Bayes-optimal classifier. Our guarantees consist of non-asymptotic rates of statistical consistency under various nonparametric settings and build on recent developments in topological data analysis.",
"title": ""
},
{
"docid": "44f2eaf0219f44a82a9967ec9a9d36cd",
"text": "Two measures of social function designed for community studies of normal aging and mild senile dementia were evaluated in 195 older adults who underwent neurological, cognitive, and affective assessment. An examining and a reviewing neurologist and a neurologically trained nurse independently rated each on a Scale of Functional Capacity. Interrater reliability was high (examining vs. reviewing neurologist, r = .97; examining neurologist vs. nurse, tau b = .802; p less than .001 for both comparisons). Estimates correlated well with an established measure of social function and with results of cognitive tests. Alternate informants evaluated participants on the Functional Activities Questionnaire and the Instrumental Activities of Daily Living Scale. The Functional Activities Questionnaire was superior to the Instrumental Activities of Daily scores. Used alone as a diagnostic tool, the Functional Activities Questionnaire was more sensitive than distinguishing between normal and demented individuals.",
"title": ""
},
{
"docid": "328052245c3a5144c492e761e7f51bae",
"text": "The screening of novel materials with good performance and the modelling of quantitative structureactivity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials dis-",
"title": ""
},
{
"docid": "f2fd1bee7b2770bbf808d8902f4964b4",
"text": "Antimicrobial and antiquorum sensing (AQS) activities of fourteen ethanolic extracts of different parts of eight plants were screened against four Gram-positive, five Gram-negative bacteria and four fungi. Depending on the plant part extract used and the test microorganism, variable activities were recorded at 3 mg per disc. Among the Grampositive bacteria tested, for example, activities of Laurus nobilis bark extract ranged between a 9.5 mm inhibition zone against Bacillus subtilis up to a 25 mm one against methicillin resistant Staphylococcus aureus. Staphylococcus aureus and Aspergillus fumigatus were the most susceptible among bacteria and fungi tested towards other plant parts. Of interest is the tangible antifungal activity of a Tecoma capensis flower extract, which is reported for the first time. However, minimum inhibitory concentrations (MIC's) for both bacteria and fungi were relatively high (0.5-3.0 mg). As for antiquorum sensing activity against Chromobacterium violaceum, superior activity (>17 mm QS inhibition) was associated with Sonchus oleraceus and Laurus nobilis extracts and weak to good activity (8-17 mm) was recorded for other plants. In conclusion, results indicate the potential of these plant extracts in treating microbial infections through cell growth inhibition or quorum sensing antagonism, which is reported for the first time, thus validating their medicinal use.",
"title": ""
},
{
"docid": "c460da4083842102fcf2a59ef73702a1",
"text": "I describe two aspects of metacognition, knowledge of cognition and regulation of cognition, and how they are related to domain-specific knowledge and cognitive abilities. I argue that metacognitive knowledge is multidimensional, domain-general in nature, and teachable. Four instructional strategies are described for promoting the construction and acquisition of metacognitive awareness. These include promoting general awareness, improving selfknowledge and regulatory skills, and promoting learning environments that are conducive to the construction and use of metacognition. This paper makes three proposals: (a) metacognition is a multidimensional phenomenon, (b) it is domain-general in nature, and (c) metacognitive knowledge and regulation can be improved using a variety of instructional strategies. Let me acknowledge at the beginning that each of these proposals is somewhat speculative. While there is a limited amount of research that supports them, more research is needed to clarify them. Each one of these proposals is addressed in a separate section of the paper. The first makes a distinction between knowledge of cognition and regulation of cognition. The second summarizes some of the recent research examining the relationship of metacognition to expertise and cognitive abilities. The third section describes four general instructional strategies for improving metacognition. These include fostering construction of new knowledge, explicating conditional knowledge, automatizing a monitoring heuristic, and creating a supportive motivational environment in the classroom. I conclude with a few thoughts about general cognitive skills instruction. A framework for understanding metacognition Researchers have been studying metacognition for over twenty years. Most agree that cognition and metacognition differ in that cognitive skills are necessary to perform a task, while metacognition is necessary to understand how the task was performed (Garner, 1987). Most researchers also make a VICTORY: PIPS No.: 136750 LAWKAP truchh7.tex; 9/12/1997; 18:12; v.6; p.1",
"title": ""
},
{
"docid": "2fc05946c4e17c0ca199cc8896e38362",
"text": "Hierarchical multilabel classification allows a sample to belong to multiple class labels residing on a hierarchy, which can be a tree or directed acyclic graph (DAG). However, popular hierarchical loss functions, such as the H-loss, can only be defined on tree hierarchies (but not on DAGs), and may also under- or over-penalize misclassifications near the bottom of the hierarchy. Besides, it has been relatively unexplored on how to make use of the loss functions in hierarchical multilabel classification. To overcome these deficiencies, we first propose hierarchical extensions of the Hamming loss and ranking loss which take the mistake at every node of the label hierarchy into consideration. Then, we first train a general learning model, which is independent of the loss function. Next, using Bayesian decision theory, we develop Bayes-optimal predictions that minimize the corresponding risks with the trained model. Computationally, instead of requiring an exhaustive summation and search for the optimal multilabel, the resultant optimization problem can be efficiently solved by a greedy algorithm. Experimental results on a number of real-world data sets show that the proposed Bayes-optimal classifier outperforms state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
f6f69455167c9a7c1df696807904885f
|
AN IMAGE PROCESSING AND NEURAL NETWORK BASED APPROACH FOR DETECTION AND CLASSIFICATION OF PLANT LEAF DISEASES
|
[
{
"docid": "9aa3a9b8fb22ba929146298386ca9e57",
"text": "Since current grading of plant diseases is mainly based on eyeballing, a new method is developed based on computer image processing. All influencing factors existed in the process of image segmentation was analyzed and leaf region was segmented by using Otsu method. In the HSI color system, H component was chosen to segment disease spot to reduce the disturbance of illumination changes and the vein. Then, disease spot regions were segmented by using Sobel operator to examine disease spot edges. Finally, plant diseases are graded by calculating the quotient of disease spot and leaf areas. Researches indicate that this method to grade plant leaf spot diseases is fast and accurate.",
"title": ""
}
] |
[
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "4dbd25b0c93b702d93513601b41553b0",
"text": "The last decade has seen a growing interest in air quality monitoring using networks of wireless low-cost sensor platforms. One of the unifying characteristics of chemical sensors typically used in real-world deployments is their slow response time. While the impact of sensor dynamics can largely be neglected when considering static scenarios, in mobile applications chemical sensor measurements should not be considered as point measurements (i.e. instantaneous in space and time). In this paper, we study the impact of sensor dynamics on measurement accuracy and locality through systematic experiments in the controlled environment of a wind tunnel. We then propose two methods for dealing with this problem: (i) reducing the effect of the sensor’s slow dynamics by using an open active sampler, and (ii) estimating the underlying true signal using a sensor model and a deconvolution technique. We consider two performance metrics for evaluation: localization accuracy of specific field features and root mean squared error in field estimation. Finally, we show that the deconvolution technique results in consistent performance improvement for all the considered scenarios, and for both metrics, while the active sniffer design considered provides an advantage only for feature localization, particularly for the highest sensor movement speed.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "a7656eb3b0443ef88ef4bb134a4f3a55",
"text": "A simple methodology is described – the multi-turbine power curve approach – a methodology to generate a qualified estimate of the time series of the aggregated power generation from planned wind turbine units distributed in an area where limited wind time series are available. This is often the situation in a planning phase where you want to simulate planned expansions in a power system with wind power. The methodology is described in a stepby-step guideline.",
"title": ""
},
{
"docid": "4af7fe3bbfcd5874f1e0607ceeda97ab",
"text": "Personality psychology addresses views of human nature and individual differences. Biological and goal-based views of human nature provide an especially useful basis for construing coping; the five-factor model of traits adds a useful set of individual differences. Coping-responses to adversity and to the distress that results-is categorized in many ways. Meta-analyses link optimism, extraversion, conscientiousness, and openness to more engagement coping; neuroticism to more disengagement coping; and optimism, conscientiousness, and agreeableness to less disengagement coping. Relations of traits to specific coping responses reveal a more nuanced picture. Several moderators of these associations also emerge: age, stressor severity, and temporal proximity between the coping activity and the coping report. Personality and coping play both independent and interactive roles in influencing physical and mental health. Recommendations are presented for ways future research can expand on the growing understanding of how personality and coping shape adjustment to stress.",
"title": ""
},
{
"docid": "f6c874435978db83361f62bfe70a6681",
"text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. [email protected] or journal managing editor Susan Haigney at [email protected].",
"title": ""
},
{
"docid": "adae03c768e3bc72f325075cf22ef7b1",
"text": "The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, I discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive taxonomy of potential solutions, address advantages and shortfalls of each design, and cover various ways to better evaluate the solutions. The review describes how VAC is addressed in monocular, stereoscopic, and multiscopic HMDs, including retinal scanning and accommodation-free displays. Eye-tracking-based approaches that do not provide natural focal cues-gaze-guided blur and dynamic stereoscopy-are also covered. Promising future research directions in this area are identified.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "9ea0612f646228a3da41b7f55c23e825",
"text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiversebased adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.",
"title": ""
},
{
"docid": "dc1cfdda40b23849f11187ce890c8f8b",
"text": "Controlled sharing of information is needed and desirable for many applications and is supported in operating systems by access control mechanisms. This paper shows how to extend programming languages to provide controlled sharing. The extension permits expression of access constraints on shared data. Access constraints can apply both to simple objects, and to objects that are components of larger objects, such as bank account records in a bank's data base. The constraints are stated declaratively, and can be enforced by static checking similar to type checking. The approach can be used to extend any strongly-typed language, but is particularly suitable for extending languages that support the notion of abstract data types.",
"title": ""
},
{
"docid": "0a4749ecc23cb04f494a987268704f0f",
"text": "With the growing demand for digital information in health care, the electronic medical record (EMR) represents the foundation of health information technology. It is essential, however, in an industry still largely dominated by paper-based records, that such systems be accepted and used. This research evaluates registered nurses’, certified nurse practitioners and physician assistants’ acceptance of EMR’s as a means to predict, define and enhance use. The research utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical model, along with the Partial Least Square (PLS) analysis to estimate the variance. Overall, the findings indicate that UTAUT is able to provide a reasonable assessment of health care professionals’ acceptance of EMR’s with social influence a significant determinant of intention and use.",
"title": ""
},
{
"docid": "b06fd59d5acdf6dd0b896a62f5d8b123",
"text": "BACKGROUND\nHippocampal volume reduction has been reported inconsistently in people with major depression.\n\n\nAIMS\nTo evaluate the interrelationships between hippocampal volumes, memory and key clinical, vascular and genetic risk factors.\n\n\nMETHOD\nTotals of 66 people with depression and 20 control participants underwent magnetic resonance imaging and clinical assessment. Measures of depression severity, psychomotor retardation, verbal and visual memory and vascular and specific genetic risk factors were collected.\n\n\nRESULTS\nReduced hippocampal volumes occurred in older people with depression, those with both early-onset and late-onset disorders and those with the melancholic subtype. Reduced hippocampal volumes were associated with deficits in visual and verbal memory performance.\n\n\nCONCLUSIONS\nAlthough reduced hippocampal volumes are most pronounced in late-onset depression, older people with early-onset disorders also display volume changes and memory loss. No clear vascular or genetic risk factors explain these findings. Hippocampal volume changes may explain how depression emerges as a risk factor to dementia.",
"title": ""
},
{
"docid": "fe318971645b171929188b091425a8ac",
"text": "Metal interconnections are expected to become the limiting factor for the performance of electronic systems as transistors continue to shrink in size. Replacing them by optical interconnections, at different levels ranging from rack-to-rack down to chip-to-chip and intra-chip interconnections, could provide the low power dissipation, low latencies and high bandwidths that are needed. The implementation of optical interconnections relies on the development of micro-optical devices that are integrated with the microelectronics on chips. Recent demonstrations of silicon low-loss waveguides, light emitters, amplifiers and lasers approach this goal, but a small silicon electro-optic modulator with a size small enough for chip-scale integration has not yet been demonstrated. Here we experimentally demonstrate a high-speed electro-optical modulator in compact silicon structures. The modulator is based on a resonant light-confining structure that enhances the sensitivity of light to small changes in refractive index of the silicon and also enables high-speed operation. The modulator is 12 micrometres in diameter, three orders of magnitude smaller than previously demonstrated. Electro-optic modulators are one of the most critical components in optoelectronic integration, and decreasing their size may enable novel chip architectures.",
"title": ""
},
{
"docid": "ed8ee467e7f40d6ba35cc6f8329ca681",
"text": "This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs.",
"title": ""
},
{
"docid": "56b706edc6d1b6a2ff64770cb3f79c2e",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "aef25b8bc64bb624fb22ce39ad7cad89",
"text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"title": ""
},
{
"docid": "dfd16d21384cf722866c22d30b3f6a18",
"text": "The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small numbers of patients (typically N<;20) and usually limited to a single type of lung sound. Larger research studies have also been impeded by the challenge of labeling large volumes of data, which is extremely labor-intensive. In this paper, we present the development of a semi-supervised deep learning algorithm for automatically classify lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.",
"title": ""
},
{
"docid": "5447d3fe8ed886a8792a3d8d504eaf44",
"text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] |
scidocsrr
|
bc5b69ea78fbccc8757f77e0a188ff0e
|
A Nonparametric Approach to Modeling Choice with Limited Data
|
[
{
"docid": "84c362cb2d4a737d7ea62d85b9144722",
"text": "This paper considers mixed, or random coeff icients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utilit y maximization has choice probabiliti es that can be approximated as closely as one pleases by a MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairl y eff icient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Acknowledgments: Both authors are at the Department of Economics, University of Cali fornia, Berkeley CA 94720-3880. Correspondence should be directed to [email protected]. We are indebted to the E. Morris Cox fund for research support, and to Moshe Ben-Akiva, David Brownstone, Denis Bolduc, Andre de Palma, and Paul Ruud for useful comments. This paper was first presented at the University of Paris X in June 1997.",
"title": ""
}
] |
[
{
"docid": "fdfea6d3a5160c591863351395929a99",
"text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.",
"title": ""
},
{
"docid": "f0db74061a2befca317f9333a0712ab9",
"text": "This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep ()learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.",
"title": ""
},
{
"docid": "e56bc26cd567aff51de3cb47f9682149",
"text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.",
"title": ""
},
{
"docid": "9c715e50cf36e14312407ed722fe7a7d",
"text": "Usual medical care often fails to meet the needs of chronically ill patients, even in managed, integrated delivery systems. The medical literature suggests strategies to improve outcomes in these patients. Effective interventions tend to fall into one of five areas: the use of evidence-based, planned care; reorganization of practice systems and provider roles; improved patient self-management support; increased access to expertise; and greater availability of clinical information. The challenge is to organize these components into an integrated system of chronic illness care. Whether this can be done most efficiently and effectively in primary care practice rather than requiring specialized systems of care remains unanswered.",
"title": ""
},
{
"docid": "b492a0063354a81bd99ac3f81c3fb1ec",
"text": "— Bangla automatic number plate recognition (ANPR) system using artificial neural network for number plate inscribing in Bangla is presented in this paper. This system splits into three major parts-number plate detection, plate character segmentation and Bangla character recognition. In number plate detection there arises many problems such as vehicle motion, complex background, distance changes etc., for this reason edge analysis method is applied. As Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that a robust feature extraction method is employed to extract the information from each Bangla words and characters which is non-sensitive to the rotation, scaling and size variations. Finally character recognition system takes this information as an input to recognize Bangla characters and words. The Bangla character recognition is implemented using multilayer feed-forward network. According to the experimental result, (The abstract needs some exact figures of findings (like success rates of recognition) and how much the performance is better than previous one.) the performance of the proposed system on different vehicle images is better in case of severe image conditions.",
"title": ""
},
{
"docid": "056f5179fa5c0cdea06d29d22a756086",
"text": "Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by secondorder quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: June 26, 2017",
"title": ""
},
{
"docid": "359d76f0b4f758c3a58e886e840c5361",
"text": "Cover crops are important components of sustainable agricultural systems. They increase surface residue and aid in the reduction of soil erosion. They improve the structure and water-holding capacity of the soil and thus increase the effectiveness of applied N fertilizer. Legume cover crops such as hairy vetch and crimson clover fix nitrogen and contribute to the nitrogen requirements of subsequent crops. Cover crops can also suppress weeds, provide suitable habitat for beneficial predator insects, and act as non-host crops for nematodes and other pests in crop rotations. This paper reviews the agronomic and economic literature on using cover crops in sustainable food production and reports on past and present research on cover crops and sustainable agriculture at the Beltsville Agricultural Research Center, Maryland. Previous studies suggested that the profitability of cover crops is primarily the result of enhanced crop yields rather than reduced input costs. The experiments at the Beltsville Agricultural Research Center on fresh-market tomato production showed that tomatoes grown with hairy vetch mulch were higher yielding and more profitable than those grown with black polyethylene and no mulch system. Previous studies of cover crops in grain production indicated that legume cover crops such as hairy vetch and crimson clover are more profitable than grass cover crops such as rye or wheat because of the ability of legumes to contribute N to the following crop. A com-",
"title": ""
},
{
"docid": "c4d0084aab61645fc26e099115e1995c",
"text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).",
"title": ""
},
{
"docid": "0b4f44030a922ba2c970c263583e8465",
"text": "BACKGROUND\nSmoking remains one of the few potentially preventable factors associated with low birthweight, preterm birth and perinatal death.\n\n\nOBJECTIVES\nTo assess the effects of smoking cessation programs implemented during pregnancy on the health of the fetus, infant, mother, and family.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Pregnancy and Childbirth Group trials register and the Cochrane Tobacco Addiction Group trials register (July 2003), MEDLINE (January 2002 to July 2003), EMBASE (January 2002 to July 2003), PsychLIT (January 2002 to July 2003), CINAHL (January 2002 to July 2003), and AUSTHEALTH (January 2002 to 2003). We contacted trial authors to locate additional unpublished data. We handsearched references of identified trials and recent obstetric journals.\n\n\nSELECTION CRITERIA\nRandomised and quasi-randomised trials of smoking cessation programs implemented during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nFour reviewers assessed trial quality and extracted data independently.\n\n\nMAIN RESULTS\nThis review included 64 trials. Fifty-one randomised controlled trials (20,931 women) and six cluster-randomised trials (over 7500 women) provided data on smoking cessation and/or perinatal outcomes. Despite substantial variation in the intensity of the intervention and the extent of reminders and reinforcement through pregnancy, there was an increase in the median intensity of both 'usual care' and interventions over time. There was a significant reduction in smoking in the intervention groups of the 48 trials included: (relative risk (RR) 0.94, 95% confidence interval (CI) 0.93 to 0.95), an absolute difference of six in 100 women continuing to smoke. The 36 trials with validated smoking cessation had a similar reduction (RR 0.94, 95% CI 0.92 to 0.95). Smoking cessation interventions reduced low birthweight (RR 0.81, 95% CI 0.70 to 0.94) and preterm birth (RR 0.84, 95% CI 0.72 to 0.98), and there was a 33 g (95% CI 11 g to 55 g) increase in mean birthweight. There were no statistically significant differences in very low birthweight, stillbirths, perinatal or neonatal mortality but these analyses had very limited power. One intervention strategy, rewards plus social support (two trials), resulted in a significantly greater smoking reduction than other strategies (RR 0.77, 95% CI 0.72 to 0.82). Five trials of smoking relapse prevention (over 800 women) showed no statistically significant reduction in relapse.\n\n\nREVIEWERS' CONCLUSIONS\nSmoking cessation programs in pregnancy reduce the proportion of women who continue to smoke, and reduce low birthweight and preterm birth. The pooled trials have inadequate power to detect reductions in perinatal mortality or very low birthweight.",
"title": ""
},
{
"docid": "03cd67f6c96d37b6345b187382b79c44",
"text": "Social media is a vital source of information during any major event, especially natural disasters. Data produced through social networking sites is seen as ubiquitous, rapid and accessible, and it is believed to empower average citizens to become more situationally aware during disasters and coordinate to help themselves. However, with the exponential increase in the volume of social media data, so comes the increase in data that are irrelevant to a disaster, thus, diminishing peoples’ ability to find the information that they need in order to organize relief efforts, find help, and potentially save lives. In this paper, we present an approach to identifying informative messages in social media streams during disaster events. Our approach is based on Convolutional Neural Networks and shows significant improvement in performance over models that use the “bag of words” and n-grams as features on several datasets of messages from flooding events.",
"title": ""
},
{
"docid": "46f41dd784c02185e0ba2f3ee4b5c8eb",
"text": "The purpose of this study was to examine the changes in temporomandibular joint (TMJ) morphology and clinical symptoms after intraoral vertical ramus osteotomy (IVRO) with and without a Le Fort I osteotomy. Of 50 Japanese patients with mandibular prognathism with mandibular and bimaxillary asymmetry, 25 underwent IVRO and 25 underwent IVRO in combination with a Le Fort I osteotomy. The TMJ symptoms and joint morphology, including disc tissue, were assessed preoperatively and postoperatively by magnetic resonance imaging and axial cephalogram. Improvement was seen in just 50% of joints with anterior disc displacement (ADD) that received IVRO and 52% of those that received IVRO with Le Fort I osteotomy. Fewer or no TMJ symptoms were reported postoperatively in 97% of the joints that received IVRO and 90% that received IVRO with Le Fort I osteotomy. Postoperatively, there were significant condylar position changes and horizontal changes in the condylar long axis on both sides in the two groups. There were no significant differences between improved ADD and unimproved ADD in condylar position change and the angle of the condylar long axis, although distinctive postoperative condylar sag was seen. These results suggest that IVRO with or without Le Fort I osteotomy can improve ADD and TMJ symptoms along with condylar position and angle, but it is difficult to predict the amount of improvement in ADD.",
"title": ""
},
{
"docid": "3af338a01d1419189b7706375feec0c2",
"text": "Like E. Paul Torrance, my colleagues and I have tried to understand the nature of creativity, to assess it, and to improve instruction by teaching for creativity as well as teaching students to think creatively. This article reviews our investment theory of creativity, propulsion theory of creative contributions, and some of the data we have collected with regard to creativity. It also describes the propulsion theory of creative contributions. Finally, it draws",
"title": ""
},
{
"docid": "a4731b9d3bfa2813858ff9ea97668577",
"text": "Both the Swenson and the Soave procedures have been adapted as transanal approaches. Our purpose is to compare the outcomes and complications between transanal Swenson and Soave procedures.This clinical analysis involved a retrospective series of 148 pediatric patients with HD from Dec, 2001, to Dec, 2015. Perioperative/operative characteristics, postoperative complications, and outcomes between the 2 groups were analyzed. Students' t-test and chi-squared analysis were performed.In total 148 patients (Soave 69, Swenson 79) were included in our study. Mean follow-up was 3.5 years. There are no significant differences in overall hospital stay and bowel function. We noted significant differences regarding mean operating time, blood loss, and overall complications. We noted significant differences in mean operating time, blood loss, and overall complications in favor of the Swenson group when compared to the Soave group (P < 0.05).According to our results, although transanal pullthrough Swenson cannot reduce overall hospital stay and improve bowel function compared with the Soave procedure, it results in less blood loss, shorter operation time, and a lower complication rate.",
"title": ""
},
{
"docid": "2a60990e13e7983edea29b131528222d",
"text": "We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes for all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivities to large optical flow and to rolling shutter effect which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.",
"title": ""
},
{
"docid": "cc4c0a749c6a3f4ac92b9709f24f03f4",
"text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computational expensive and have traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor, is also discussed.",
"title": ""
},
{
"docid": "508ad7d072a62433f3233d90286ef902",
"text": "The NP-hard Colorful Components problem is, given a vertex-colored graph, to delete a minimum number of edges such that no connected component contains two vertices of the same color. It has applications in multiple sequence alignment and in multiple network alignment where the colors correspond to species. We initiate a systematic complexity-theoretic study of Colorful Components by presenting NP-hardness as well as fixed-parameter tractability results for different variants of Colorful Components. We also perform experiments with our algorithms and additionally develop an efficient and very accurate heuristic algorithm clearly outperforming a previous min-cut-based heuristic on multiple sequence alignment data.",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "370813b3114c8f8c2611b72876159efe",
"text": "Sciatic nerve structure and nomenclature: epineurium to paraneurium is this a new paradigm? We read with interest the study by Perlas et al., (1) about the sciatic nerve block at the level of its division in the popliteal fossa. We have been developing this technique in our routine practice during the past 7 years and have no doub about the effi cacy and safety of this approach (2,3). However, we do not agree with the author's defi nition of the structure and limits of the nerve. Given the impact of publications from the principal author's research group on the regional anesthesia community, we are compelled to comment on proposed terminology that we feel may create confusion and contribute to the creation of a new paradigm in peripheral nerve blockade. The peripheral nerve is a well-defi ned anatomical entity with an unequivocal histological structure (Figure 1). The fascicle is the noble and functional unit of the nerves. Fascicles are constituted by a group of axons covered individually by the endoneurium and tightly packed within the perineurium. The epineurium comprises all the tissues that hold and surround the fascicles and defi nes the macroscopic external limit of the nerve. Epineurium includes loose connective and adipose tissue and epineurial vessels. Fascicles can be found as isolated units or in groups of fascicles supported and held together into a mixed collagen and fat tissue in different proportions (within the epineurial cover). The epineurium cover is the thin layer of connective tissue that encloses the whole structure and constitutes the anatomical limit of the nerve. It acts as a mechanical barrier (limiting the spread of injected local anesthetic), but not as a physical barrier (allowing the passive diffusion of local anesthetic along the concentration gradient). The paraneurium is the connective tissue that supports and connects the nerve with the surrounding structures (eg, muscles, bone, joints, tendons, and vessels) and acts as a gliding layer. We agree that the limits of the epineurium of the sciatic nerve, like those of the brachial plexus, are more complex than in single nerves. Therefore, the sciatic nerve block deserves special consideration. If we accept that the sciatic nerve is an anatomical unit, the epineurium should include the groups of fascicles that will constitute the tibial and the common peroneal nerves. Similarly, the epineurium of the common peroneal nerve contains the fascicles that will be part of the lateral cutane-ous, …",
"title": ""
},
{
"docid": "3a942985eb615f459a670ada83ce3a41",
"text": "A new method of realising RF barcodes is presented using arrays of identical microstrip dipoles capacitively tuned to be resonant at different frequencies within the desired licensed-free ISM bands. When interrogated, the reader detects each dipole's resonance frequency and with n resonant dipoles, potentially 2/sup n/-1 items in the field can be tagged and identified. Results for RF barcode elements in the 5.8 GHz band are presented. It is shown that with accurate centre frequency prediction and by operating over multiple ISM and other license-exempt bands, a useful number of information bits can be realised. Further increase may be possible using ultra-wideband (UWB) technology. Low cost lithographic printing techniques based on using metal ink on low cost substrates could lead to an economical alternative to current RFID systems in many applications.",
"title": ""
},
{
"docid": "bd47b468b1754ddd9fecf8620eb0b037",
"text": "Common bean (Phaseolus vulgaris) is grown throughout the world and comprises roughly 50% of the grain legumes consumed worldwide. Despite this, genetic resources for common beans have been lacking. Next generation sequencing, has facilitated our investigation of the gene expression profiles associated with biologically important traits in common bean. An increased understanding of gene expression in common bean will improve our understanding of gene expression patterns in other legume species. Combining recently developed genomic resources for Phaseolus vulgaris, including predicted gene calls, with RNA-Seq technology, we measured the gene expression patterns from 24 samples collected from seven tissues at developmentally important stages and from three nitrogen treatments. Gene expression patterns throughout the plant were analyzed to better understand changes due to nodulation, seed development, and nitrogen utilization. We have identified 11,010 genes differentially expressed with a fold change ≥ 2 and a P-value < 0.05 between different tissues at the same time point, 15,752 genes differentially expressed within a tissue due to changes in development, and 2,315 genes expressed only in a single tissue. These analyses identified 2,970 genes with expression patterns that appear to be directly dependent on the source of available nitrogen. Finally, we have assembled this data in a publicly available database, The Phaseolus vulgaris Gene Expression Atlas (Pv GEA), http://plantgrn.noble.org/PvGEA/ . Using the website, researchers can query gene expression profiles of their gene of interest, search for genes expressed in different tissues, or download the dataset in a tabular form. These data provide the basis for a gene expression atlas, which will facilitate functional genomic studies in common bean. Analysis of this dataset has identified genes important in regulating seed composition and has increased our understanding of nodulation and impact of the nitrogen source on assimilation and distribution throughout the plant.",
"title": ""
}
] |
scidocsrr
|
d4c7e1dfe55118c0633b905bc737cc53
|
Lifelong Generative Modeling
|
[
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
}
] |
[
{
"docid": "49e824c73b62d4c05b28fbd46fde1a28",
"text": "The Advent of Internet-of-Things (IoT) paradigm has brought exciting opportunities to solve many real-world problems. IoT in industries is poised to play an important role not only to increase productivity and efficiency but also to improve customer experiences. Two main challenges that are of particular interest to industry include: handling device heterogeneity and getting contextual information to make informed decisions. These challenges can be addressed by IoT along with proven technologies like the Semantic Web. In this paper, we present our work, SQenIoT: a Semantic Query Engine for Industrial IoT. SQenIoT resides on a commercial product and offers query capabilities to retrieve information regarding the connected things in a given facility. We also propose a things query language, targeted for resource-constrained gateways and non-technical personnel such as facility managers. Two other contributions include multi-level ontologies and mechanisms for semantic tagging in our commercial products. The implementation details of SQenIoT and its performance results are also presented.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "8213f9488af8e1492d7a4ac2eec3a573",
"text": "The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of highdimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.",
"title": ""
},
{
"docid": "bd3f7e9fe1637a52adcf11aefc58f9aa",
"text": "Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert’s driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress – the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the ChauffeurNet model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world.",
"title": ""
},
{
"docid": "d7e0b50d818ab031c40763dd869c5615",
"text": "Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. e goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. is view comprises five key aspects: the data, the tasks, the technology, the human, as well as the implementation. Picking up these aspects individually, we elaborate several interaction methods for visualization. We introduce a multi-threading architecture for efficient interactive exploration. We present interaction techniques for different types of data (e.g., multivariate data, spatio-temporal data, graphs) and different visualization tasks (e.g., exploratory navigation, visual comparison, visual editing). With respect to technology, we illustrate approaches that utilize modern interaction modalities (e.g., touch, tangibles, proxemics) as well as classic ones. While the human is important throughout this work, we also consider automatic methods to assist the interactive part. In addition to solutions for individual problems, a major contribution of this work is the overarching view of interaction in visualization as a whole. is includes a critical discussion of interaction, the identification of links between the key aspects of interaction, and the formulation of research topics for future work with a focus on interaction.",
"title": ""
},
{
"docid": "e84b6bbb2eaee0edb6ac65d585056448",
"text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.",
"title": ""
},
{
"docid": "2a5921cd4554caaa9eb6fd397088ecec",
"text": "This work examines how a classifier's output of a cut-in prediction can be mapped to a (semi-) automated car's reaction to it. Several approaches of decision making are compared using real world data of a lane change predictor for an automated longitudinal guidance system, similar to an adaptive cruise control system, as an example. We show how the decision algorithms affect the time when a new lead vehicle is selected and how much more comfortable we can decelerate given different selection strategies. We propose a novel decision algorithm and conducted a case study with a prototype research car to evaluate the subjective quality of the different approaches.",
"title": ""
},
{
"docid": "8dfd91ceadfcceea352975f9b5958aaf",
"text": "The bag-of-words representation commonly used in text analysis can be analyzed very efficiently and retains a great deal of useful information, but it is also troublesome because the same thought can be expressed using many different terms or one term can have very different meanings. Dimension reduction can collapse together terms that have the same semantics, to identify and disambiguate terms with multiple meanings and to provide a lower-dimensional representation of documents that reflects concepts instead of raw terms. In this chapter, we survey two influential forms of dimension reduction. Latent semantic indexing uses spectral decomposition to identify a lower-dimensional representation that maintains semantic properties of the documents. Topic modeling, including probabilistic latent semantic indexing and latent Dirichlet allocation, is a form of dimension reduction that uses a probabilistic model to find the co-occurrence patterns of terms that correspond to semantic topics in a collection of documents. We describe the basic technologies in detail and expose the underlying mechanism. We also discuss recent advances that have made it possible to apply these techniques to very large and evolving text collections and to incorporate network structure or other contextual information.",
"title": ""
},
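The latent semantic indexing described in the passage above reduces a term-document matrix to a low-rank representation via spectral decomposition. The snippet below is a minimal, hypothetical sketch of that idea using a truncated SVD on a toy corpus; the example documents and the choice of k = 2 latent dimensions are assumptions made purely for illustration and are not taken from the cited chapter.

# Minimal LSI sketch: truncated SVD of a toy term-document count matrix.
# The documents and the rank k = 2 are illustrative assumptions only.
import numpy as np

docs = [
    "cat sat on the mat",
    "dog sat on the log",
    "stocks fell on the news",
    "markets and stocks rallied",
]
vocab = sorted({w for d in docs for w in d.split()})
# Terms x documents count matrix.
X = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # k-dimensional document representations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Topically similar documents should generally end up closer in the latent space.
print(cosine(doc_vecs[0], doc_vecs[1]), cosine(doc_vecs[0], doc_vecs[2]))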
{
"docid": "9ddd90ac97b6c3727d9f4f69d44bb873",
"text": "In her 2011 EVT/WOTE keynote, Travis County, Texas County Clerk Dana DeBeauvoir described the qualities she wanted in her ideal election system to replace their existing DREs. In response, in April of 2012, the authors, working with DeBeauvoir and her staff, jointly architected STAR-Vote, a voting system with a DRE-style human interface and a “belt and suspenders” approach to verifiability. It provides both a paper trail and end-toend cryptography using COTS hardware. It is designed to support both ballot-level risk-limiting audits, and auditing by individual voters and observers. The human interface and process flow is based on modern usability research. This paper describes the STAR-Vote architecture, which could well be the next-generation voting system for Travis County and perhaps elsewhere. This paper is a working draft. Significant changes should be expected as the STAR-Vote effort matures.",
"title": ""
},
{
"docid": "b15ed1584eb030fba1ab3c882983dbf0",
"text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.",
"title": ""
},
{
"docid": "b484d05525e016dfc834754568030a42",
"text": "This study examines the academic abilities of children and adolescents who were once diagnosed with an autism spectrum disorder, but who no longer meet diagnostic criteria for this disorder. These individuals have achieved social and language skills within the average range for their ages, receive little or no school support, and are referred to as having achieved \"optimal outcomes.\" Performance of 32 individuals who achieved optimal outcomes, 41 high-functioning individuals with a current autism spectrum disorder diagnosis (high-functioning autism), and 34 typically developing peers was compared on measures of decoding, reading comprehension, mathematical problem solving, and written expression. Groups were matched on age, sex, and nonverbal IQ; however, the high-functioning autism group scored significantly lower than the optimal outcome and typically developing groups on verbal IQ. All three groups performed in the average range on all subtests measured, and no significant differences were found in performance of the optimal outcome and typically developing groups. The high-functioning autism group scored significantly lower on subtests of reading comprehension and mathematical problem solving than the optimal outcome group. These findings suggest that the academic abilities of individuals who achieved optimal outcomes are similar to those of their typically developing peers, even in areas where individuals who have retained their autism spectrum disorder diagnoses exhibit some ongoing difficulty.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "d0fc352e347f7df09140068a4195eb9e",
"text": "A wave of alternative coins that can be effectively mined without specialized hardware, and a surge in cryptocurrencies' market value has led to the development of cryptocurrency mining ( cryptomining ) services, such as Coinhive, which can be easily integrated into websites to monetize the computational power of their visitors. While legitimate website operators are exploring these services as an alternative to advertisements, they have also drawn the attention of cybercriminals: drive-by mining (also known as cryptojacking ) is a new web-based attack, in which an infected website secretly executes JavaScript code and/or a WebAssembly module in the user's browser to mine cryptocurrencies without her consent. In this paper, we perform a comprehensive analysis on Alexa's Top 1 Million websites to shed light on the prevalence and profitability of this attack. We study the websites affected by drive-by mining to understand the techniques being used to evade detection, and the latest web technologies being exploited to efficiently mine cryptocurrency. As a result of our study, which covers 28 Coinhive-like services that are widely being used by drive-by mining websites, we identified 20 active cryptomining campaigns. Motivated by our findings, we investigate possible countermeasures against this type of attack. We discuss how current blacklisting approaches and heuristics based on CPU usage are insufficient, and present MineSweeper, a novel detection technique that is based on the intrinsic characteristics of cryptomining code, and, thus, is resilient to obfuscation. Our approach could be integrated into browsers to warn users about silent cryptomining when visiting websites that do not ask for their consent.",
"title": ""
},
{
"docid": "70e3a918cb152278360c2c54a8934b2c",
"text": "In translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a cross-sentence context-aware approach and investigate the influence of historical contextual information on the performance of neural machine translation (NMT). First, this history is summarized in a hierarchical way. We then integrate the historical representation into NMT in two strategies: 1) a warm-start of encoder and decoder states, and 2) an auxiliary context source for updating decoder states. Experimental results on a large Chinese-English translation task show that our approach significantly improves upon a strong attention-based NMT system by up to +2.1 BLEU points.",
"title": ""
},
{
"docid": "2e08d9509b4dc7b75eb311d49dd1e6ca",
"text": "The use of memory forensic techniques has the potential to enhance computer forensic investigations. The analysis of digital evidence is facing several key challenges; an increase in electronic devices, network connections and bandwidth, the use of anti-forensic technologies and the development of network centric applications and technologies has lead to less potential evidence stored on static media and increased amounts of data stored off-system. Memory forensic techniques have the potential to overcome these issues in forensic analysis. While much of the current research in memory forensics has been focussed on low-level data, there is a need for research to extract high-level data from physical memory as a means of providing forensic investigators with greater insight into a target system. This paper outlines the need for further research into memory forensic techniques. In particular it stresses the need for methods and techniques for understanding context on a system and also as a means of augmenting other data sources to provide a more complete and efficient searching of investigations.",
"title": ""
},
{
"docid": "cfafeb416d45b77dd3c9e6a94bfb5049",
"text": " Choosing the best genetic strains of mice for developing a new knockout or transgenic mouse requires extensive knowledge of the endogenous traits of inbred strains. Background genes from the parental strains may interact with the mutated gene, in a manner which could severely compromise the interpretation of the mutant phenotype. The present overview summarizes the literature on a wide variety of behavioral traits for the 129, C57BL/6, DBA/2, and many other inbred strains of mice. Strain distributions are described for open field activity, learning and memory tasks, aggression, sexual and parental behaviors, acoustic startle and prepulse inhibition, and the behavioral actions of ethanol, nicotine, cocaine, opiates, antipsychotics, and anxiolytics. Using the referenced information, molecular geneticists can choose optimal parental strains of mice, and perhaps develop new embryonic stem cell progenitors, for new knockouts and transgenics to investigate gene function, and to serve as animal models in the development of novel therapeutics for human genetic diseases.",
"title": ""
},
{
"docid": "c0f4eda55d0d1021e8f15e34dd62268d",
"text": "This paper presents recent results of small pixel development for different applications and discusses optical and electrical characteristics of small pixels along with their respective images. Presented are basic optical and electrical characteristics of pixels with sizes in the range from 2.2μm to 1.1μm,. The paper provides a comparison of front side illumination (FSI) with back side illumination (BSI) technology and considers tradeoffs and applicability of each technology for different pixel sizes. Additional functionalities that can be added to pixel arrays with small pixel, in particular high dynamic range capabilities are also discussed. 1. FSI and BSI technology development Pixel shrinking is the common trend in image sensors for all areas of consumer electronics, including mobile imaging, digital still and video cameras, PC cameras, automotive, surveillance, and other applications. In mobile and digital still camera (DSC) applications, 1.75μm and 1.4μm pixels are widely used in production. Designers of image sensors are actively working on super-small 1.1μm and 0.9um pixels. In high-end DSC cameras with interchangeable lenses, pixel size reduces from the range of 5 – 6 μm to 3 – 4 μm, and even smaller. With very high requirements for angular pixel performance, this results in similar or even bigger challenges as for sub 1.4μm pixels. Altogether, pixel size reduction in all imaging areas has been the most powerful driving force for new technologies and innovations in pixel development. Aptina continues to develop FSI AptinaTM A-PixTM technology for pixel sizes of 1.4μm and bigger. Figures 1a and 1b illustrate a comparison of a regular pixel for a CMOS imager with Aptina’s A-Pix technology. Adding a light guide (LG) and extending the depth of the photodiode (PD) allow significant reduction of both optical and electrical crosstalk, thus significantly boosting pixel performance [1]. A-Pix technology has become a mature manufacturing process that provides high pixel performance with lower wafer cost compared to BSI technology. The latest efforts in developing A-Pix technology were focused on improving symmetry of the pixel, which resulted in extremely low optical cross-talk, reduced green imbalance and color shading. Improvements stem from improvements in the design and manufacturing of LG, along with the structure of Si PD. LG allows one to compensate for pixel asymmetry (at least its optical part) thus providing both optimal utilization of Si area, and minimal green imbalance / color shading. Figure 2 shows an example of green imbalance for 5Mpix sensors with 1.4μm pixels size designed for 27degree max CRA of the lens. Improvement of the LG design reduces green imbalance by more than 7x. BSI technology allows further reduction of pixel size to extremely small 1.1μm and 0.9μm, and more symmetrical pixel design for larger pixel nodes. Similar to A-Pix, the use of back side illumination in pixel design allows significant reduction of optical and electrical crosstalk, as illustrated in Figure 1c. Both BSI and Aptina Apix technology use the 90nm gate and 65nm pixel manufacturing process. Aptina’s BSI technology uses cost-effective P-EPI on P+ bulk silicon as starting wafers. The wafers receive normal FSI CMOS process with skipping some FSI p modules. Front side alignment marks are added for later backside alignments. 
The device wafers are bonded to BSI carrier wafers, and are thinned down to a few microns thick through wafer back side grinding, selective wet etch, and chemical-mechanical planarization process. The wafer thickness is matched to front side PD depth to reduce cross-talk. Finally, anti-reflective coatings are applied to backside silicon surface and micro-lens to increase pixel QE. Figure 3 shows normalized quantum efficiency spectral characteristics of 1.1μm BSI pixels. Pixels exhibit high QE for all 3 colors and small crosstalk that benefit overall image quality. Figure 4 presents luminance SNR plots for 1.4μm FSI and BSI pixels and 1.1μm BSI pixel. Due to advances of A-Pix technology, characteristics of FSI and BSI 1.4μm pixel are close, with the BSI pixel slightly outperforming FSI pixel, especially at very high CRA. However, the difference in performance is much smaller compared to conventional FSI pixel. For 1.1μm pixels, BSI technology definitely plays a key role in achieving high pixel performance. Major pixel photoelectrical characteristics are presented in Table 1. 2. Image quality of sensors with equal optical format Figure 5 presents SNR10 metrics for different pixel size inversely normalized per pixel area scene illumination at which luminance SNR is equal to 10x for specified lens conditions, integration time, and color correction matrix. As can be seen from the plot, the latest generation of pixels provides SNR10 performance that is scaled to the area, and as a result, provides the same image quality at the same optical format for the mid level of exposures. The latest generation of pixels with the size of (1.1μm – 2.2μm) in Figure 5 uses advances of A-pix technology to boost pixel performance. Many products for mobile and DSC applications use 1.4μm pixel; the latest generations of 1.75μm, 1.9μm, and 2.2μm are in mass production both for still shot and video-centric 2D and 3D applications. Bringing the latest technology to the large 5.6μm pixel has allowed us to significantly boost performance of that pixel (shown as a second bar of Figure 5 for 5.6μm pixel) for automotive applications. As was mentioned earlier, BSI technology furthers the extension of array size for the optical formats. The latest addition to the mainstream mobile cameras with 1⁄4‖ optical format is 8Mpix image sensor with 1.1μm pixels size. Figure 6 compares images from the previous 5Mpix sensor with 1⁄4‖ optical format with 1.4μm pixel size with images from the new 8Mpix sensor with 1.1μm pixel that fits into the same 1/4‖ optical format. Images were taken from the scene with ~100 lux illumination at 67ms integration time and typical f/2.8 lens for mobile applications. Zoomed fragments of the images with 100% zoom for 5Mpix sensor show very comparable quality of the images and confirm that similar image quality for a given optical format results when pixel performance that is scaled to the area continues to be the same. Figure 4 shows also the lowest achievable SNR10 for 1.4μm pixel at similar conditions for the ideal case of QE equal to 100% for all colors and no optical or electrical crosstalk – color overlaps are defined only by color filters. The shape of color filters is taken from large pixel sensor for high-end DSC application and assumes very good color reproduction. It is interesting to see that current 1.4μm pixel has only 40% lower SNR at conditions close to first acceptable image, SNR10 [2]. 3. 
Additional functionality for arrays with small pixels With the diffraction limits of imaging lenses, the minimum resolvable feature size (green light, Rayleigh limit) for an fnumber 2.8 lens is around 1.8 microns [3]. As pixel sizes continue to shrink below 1.8 microns, the image field produced from the optics is oversampled and system MTF does not continue to show scaled improvement based on increased frequency pixel sampling. How can we take advantage of increased frequency pixel sampling then? High Dynamic Range. Humans have the ability to gaze upon a fixed scene and clearly see very bright and dark objects simultaneously. The typical maximum brightness range visible by humans within a fixed scene is about 10,000 to 1 or 80dB [4]. Mobile and digital still cameras often struggle to match the intra-scene dynamic range of the human visual system and can’t capture high range scenes (50-80dB) primarily because the pixels in the camera’s sensors have a linear response and limited well capacities. HDR image capture technology can address the problem of limited dynamic range in today’s camera. However, a low cost technique that provides adequate performance for still and video applications is needed. Frame Multi-exposure HDR. The frame multi-exposure technique, otherwise known as exposure bracketing, is widely used in the industry to capture several photos of a scene and combine them into an HDR photo. Although this technique is simple, effective, and available to anyone with a camera with exposure control, the drawbacks relegate this technique to still scene photography and frame buffer-based post processing. If an HDR camera system is desired that doesn’t require frame memory and can reduce motion artifacts to a level where video capture is possible, the common image sensor architecture used in most cameras today must be changed. Can we use smaller pixels to provide multi-exposure HDR that doesn’t require frame memory for photos and reduces motion artifacts and allows video capture? Interleaved HDR Capture. With pixel size reduction there is an opportunity to take advantage of the diffraction limits of camera optical systems by spatially interleaving pixels with differing exposure time controls to achieve multi-exposure capture. Figure 7 shows an example of a dual exposure capture system using interleaved exposures within a standard Bayer pattern. This form of intra-frame multi-exposure HDR capture can be easily incorporated into standard CMOS sensors and doesn’t require the additional readout speed or large memories. The tradeoff of interleaving the exposures is that fewer pixels are available for each exposure image and can affect the overall captured image resolution. This is where the advantage of small pixels comes into play: as pixels shrink below the diffraction limit, the system approaches being oversampled such that the MTF doesn’t improve proportionally to pixel size. We propose that greater gain in overall image quality may be achieved by spatially sampling different exposures to capture higher scene quality rather than oversampling the image. In Figure 7, pairs of rows are used for each exposure to ens",
"title": ""
},
{
"docid": "d518f1b11f2d0fd29dcef991afe17d17",
"text": "Applications must be able to synchronize accesses to operating system resources in order to ensure correctness in the face of concurrency and system failures. System transactions allow the programmer to specify updates to heterogeneous system resources with the OS guaranteeing atomicity, consistency, isolation, and durability (ACID). System transactions efficiently and cleanly solve persistent concurrency problems that are difficult to address with other techniques. For example, system transactions eliminate security vulnerabilities in the file system that are caused by time-of-check-to-time-of-use (TOCTTOU) race conditions. System transactions enable an unsuccessful software installation to roll back without disturbing concurrent, independent updates to the file system.\n This paper describes TxOS, a variant of Linux 2.6.22 that implements system transactions. TxOS uses new implementation techniques to provide fast, serializable transactions with strong isolation and fairness between system transactions and non-transactional activity. The prototype demonstrates that a mature OS running on commodity hardware can provide system transactions at a reasonable performance cost. For instance, a transactional installation of OpenSSH incurs only 10% overhead, and a non-transactional compilation of Linux incurs negligible overhead on TxOS. By making transactions a central OS abstraction, TxOS enables new transactional services. For example, one developer prototyped a transactional ext3 file system in less than one month.",
"title": ""
},
{
"docid": "36d7f776d7297f67a136825e9628effc",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
}
] |
scidocsrr
|
deefdb6e5bce6cd80d5f5d349a92c5f2
|
MoFAP: A Multi-level Representation for Action Recognition
|
[
{
"docid": "c439a5c8405d8ba7f831a5ac4b1576a7",
"text": "1. Cao, L., Liu, Z., Huang, T.S.: Cross-dataset action detection. In: CVPR (2010). 2. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR (2011) 3. Lan, T., etc.: Discriminative figure-centric models for joint action localization and recognition. In: ICCV (2011). 4. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: CVPR (2013). 5. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013). Experiments",
"title": ""
},
{
"docid": "112fc675cce705b3bab9cb66ca1c08da",
"text": "Our Approach, 0.66 GIST 29.7 Spatial Pyramid HOG 29.8 Spatial Pyramid SIFT 34.4 ROI-GIST 26.5 Scene DPM 30.4 MM-Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representative: Need to occur frequently enough • discriminative: Need to be different enough from the rest of the “visual world” Goal: a mid-level visual representation Experimental Analysis Bonus: works even better if weakly supervised!",
"title": ""
},
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "52bce24f8ec738f9b9dfd472acd6b101",
"text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.",
"title": ""
}
] |
[
{
"docid": "aa5a0018ae771cf6cfbca628b5d1e1fd",
"text": "Cloud computing is about sharing any imaginable entity such as processing units, storage devices or software. The provided service is utterly economical and expandable. Cloud computing's attractive benefits entice huge interest from both business owners and cyber thieves. Consequently, computer forensic investigation steps in to find evidence against criminals. As a result of the new technology and methods used in cloud computing, forensic investigation techniques face different types of issues while inspecting a case. The most profound challenges are the difficulty of dealing with different rulings imposed on the variety of data saved in different locations, limited access to obtain evidence from the cloud, and the issue of seizing the physical evidence for the sake of integrity validation or evidence presentation. This paper suggests a simple yet very useful solution to overcome the aforementioned issues in forensic investigation of cloud systems. Utilizing a TPM in the hypervisor, implementing multi-factor authentication and updating the cloud service provider policy to provide persistent storage devices are some of the recommended solutions. Utilizing the proposed solutions, the cloud service will be compatible with current digital forensic investigation practices; alongside this, it brings the great advantage of being investigable and consequently earns the trust of the client.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As the population is ever increasing, the demand for agricultural products is increasing day by day, so there is a need to minimize labor, limit the use of water and increase the production of crops. Hence there is a need to switch from traditional agriculture to modern agriculture. The introduction of the Internet of Things into agriculture modernization will help solve these problems. This paper presents an IoT-based agriculture production system which will monitor and analyze the crop environment, such as temperature, humidity and moisture content in the soil. The paper uses the integration of RFID technology and sensors. The two have different objectives: sensors are for sensing and RFID technology is for identification. This will effectively solve the farmer's problems, increase the yield and save time, power and money.",
"title": ""
},
{
"docid": "53aa1145047cc06a1c401b04896ff1b1",
"text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.",
"title": ""
},
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "e7a51207dd5119ad22fbf35a7b4afca7",
"text": "AIM\nTo characterize types of university students based on satisfaction with life domains that affect eating habits, satisfaction with food-related life and subjective happiness.\n\n\nMATERIALS AND METHODS\nA questionnaire was applied to a nonrandom sample of 305 students of both genders in five universities in Chile. The questionnaire included the abbreviated Multidimensional Student's Life Satisfaction Scale (MSLSS), Satisfaction with Food-related Life Scale (SWFL) and the Subjective Happiness Scale (SHS). Eating habits, frequency of food consumption in and outside the place of residence, approximate height and weight and sociodemographic variables were measured.\n\n\nRESULTS\nUsing factor analysis, the five-domain structure of the MSLSS was confirmed with 26 of the 30 items of the abbreviated version: Family, Friends, Self, Environment and University. Using cluster analysis four types of students were distinguished that differ significantly in the MSLSS global and domain scores, SWFL and SHS scores, gender, ownership of a food allowance card funded by the Chilean government, importance attributed to food for well-being and socioeconomic status.\n\n\nCONCLUSIONS\nHigher levels of life satisfaction and happiness are associated with greater satisfaction with food-related life. Other major life domains that affect students' subjective well-being are Family, Friends, University and Self. Greater satisfaction in some domains may counterbalance the lower satisfaction in others.",
"title": ""
},
{
"docid": "8b34b86cb1ce892a496740bfbff0f9c5",
"text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.",
"title": ""
},
{
"docid": "b4e942dc860e127d6370d4425176d62f",
"text": "Several years ago we introduced the Balanced Scorecard (Kaplan and Norton 1992). We began with the premise that an exclusive reliance on financial measures in a management system is insufficient. Financial measures are lag indicators that report on the outcomes from past actions. Exclusive reliance on financial indicators could promote behavior that sacrifices long-term value creation for short-term performance (Porter 1992; AICPA 1994). The Balanced Scorecard approach retains measures of financial performance-the lagging outcome indicators-but supplements these with measures on the drivers, the lead indicators, of future financial performance.",
"title": ""
},
{
"docid": "4294edb250b333a0fe5863860bcb7a8a",
"text": "Present-day malware analysis techniques use both virtualized and emulated environments to analyze malware. The reason is that such environments provide isolation and system restoring capabilities, which facilitate automated analysis of malware samples. However, there exists a class of malware, called VM-aware malware, which is capable of detecting such environments and then hide its malicious behavior to foil the analysis. Because of the artifacts introduced by virtualization or emulation layers, it has always been and will always be possible for malware to detect virtual environments.\n The definitive way to observe the actual behavior of VM-aware malware is to execute them in a system running on real hardware, which is called a \"bare-metal\" system. However, after each analysis, the system must be restored back to the previous clean state. This is because running a malware program can leave the system in an instable/insecure state and/or interfere with the results of a subsequent analysis run. Most of the available state-of-the-art system restore solutions are based on disk restoring and require a system reboot. This results in a significant downtime between each analysis. Because of this limitation, efficient automation of malware analysis in bare-metal systems has been a challenge.\n This paper presents the design, implementation, and evaluation of a malware analysis framework for bare-metal systems that is based on a fast and rebootless system restore technique. Live system restore is accomplished by restoring the entire physical memory of the analysis operating system from another, small operating system that runs outside of the target OS. By using this technique, we were able to perform a rebootless restore of a live Windows system, running on commodity hardware, within four seconds. We also analyzed 42 malware samples from seven different malware families, that are known to be \"silent\" in a virtualized or emulated environments, and all of them showed their true malicious behavior within our bare-metal analysis environment.",
"title": ""
},
{
"docid": "b2132ee641e8b2ae5da9f921e3f0ecd5",
"text": "action into more concrete ones. Each dashed arrow maps a task into a plan of actions. [...] above it, and decides what activities need to be performed to carry out those tasks. Performing a task may involve refining it into lower-level steps, issuing subtasks to other components below it in the hierarchy, issuing commands to be executed by the platform, and reporting to the component that issued the task. In general, tasks in different parts of the hierarchy may involve concurrent use of different types of models and specialized reasoning functions. This example illustrates two important principles of deliberation: hierarchical organization and continual online processing. Hierarchically organized deliberation. Some of the actions the actor wishes to perform do not map directly into a command executable by its platform. An action may need further refinement and planning. This is done online and may require different representations, tools, and techniques from the ones that generated the task. A hierarchized deliberation process is not intended solely to reduce the search complexity of offline plan synthesis. It is needed mainly to address the heterogeneous nature of the actions about which the actor is deliberating, and the corresponding heterogeneous representations and models that such deliberations require. Continual online deliberation. Only in exceptional circumstances will the actor do all of its deliberation offline before executing any of its planned actions. Instead, the actor generally deliberates at runtime about how to carry out the tasks it is currently performing. The deliberation remains partial until the actor reaches its objective, including through flexible modification of its plans and retrials. The actor’s predictive models are often limited. Its capability to acquire and maintain a broad knowledge about the current state of its environment is very restricted. The cost of minor mistakes and retrials are often lower than the cost of extensive modeling, information gathering, and thorough deliberation. Throughout the acting process, the actor refines and monitors its actions; reacts to events; and extends, updates, and repairs its plan on the basis of its perception focused on the relevant part of the environment. Different parts of the actor’s hierarchy often use different representations of the state of the actor and its environment. These representations may correspond to different amounts of detail in the description of the state and different mathematical constructs. In Figure 1.2, a graph of discrete locations may be used at the upper levels, while the lower levels may use vectors of continuous configuration variables for the robot limbs. Finally, because complex deliberations can be compiled down by learning into low-level commands, the frontier between deliberation functions and the execution platform is not rigid; it evolves with the actor’s experience.",
"title": ""
},
{
"docid": "8e0ec02b22243b4afb04a276712ff6cf",
"text": "1 Morphology with or without Affixes The last few years have seen the emergence of several clearly articulated alternative approaches to morphology. One such approach rests on the notion that only stems of the so-called lexical categories (N, V, A) are morpheme \"pieces\" in the traditional sense—connections between (bundles of) meaning (features) and (bundles of) sound (features). What look like affixes on this view are merely the by-product of morphophonological rules called word formation rules (WFRs) that are sensitive to features associated with the lexical categories, called lexemes. Such an amorphous or affixless theory, adumbrated by Beard (1966) and Aronoff (1976), has been articulated most notably by Anderson (1992) and in major new studies by Aronoff (1992) and Beard (1991). In contrast, Lieber (1992) has refined the traditional notion that affixes as well as lexical stems are \"mor-pheme\" pieces whose lexical entries relate phonological form with meaning and function. For Lieber and other \"lexicalists\" (see, e.g., Jensen 1990), the combining of lexical items creates the words that operate in the syntax. In this paper we describe and defend a third theory of morphology , Distributed Morphology, 1 which combines features of the affixless and the lexicalist alternatives. With Anderson, Beard, and Aronoff, we endorse the separation of the terminal elements involved in the syntax from the phonological realization of these elements. With Lieber and the lexicalists, on the other hand, we take the phonological realization of the terminal elements in the syntax to be governed by lexical (Vocabulary) entries that relate bundles of morphosyntactic features to bundles of pho-nological features. We have called our approach Distributed Morphology (hereafter DM) to highlight the fact that the machinery of what traditionally has been called morphology is not concentrated in a single component of the gram",
"title": ""
},
{
"docid": "209472a5a37a3bb362e43d1b0abb7fd3",
"text": "The goals of the review are threefold: (a) to highlight the educational and employment consequences of poorly developed mathematical competencies; (b) overview the characteristics of children with mathematical learning disability (MLD) and with persistently low achievement (LA) in mathematics; and (c) provide a primer on cognitive science research that is aimed at identifying the cognitive mechanisms underlying these learning disabilities and associated cognitive interventions. Literatures on the educational and economic consequences of poor mathematics achievement were reviewed and integrated with reviews of epidemiological, behavioral genetic, and cognitive science studies of poor mathematics achievement. Poor mathematical competencies are common among adults and result in employment difficulties and difficulties in many common day-to-day activities. Among students, ∼ 7% of children and adolescents have MLD and another 10% show persistent LA in mathematics, despite average abilities in most other areas. Children with MLD and their LA peers have deficits in understanding and representing numerical magnitude, difficulties retrieving basic arithmetic facts from long-term memory, and delays in learning mathematical procedures. These deficits and delays cannot be attributed to intelligence but are related to working memory deficits for children with MLD, but not LA children. These individuals have identifiable number and memory delays and deficits that seem to be specific to mathematics learning. Interventions that target these cognitive deficits are in development and preliminary results are promising.",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "05b6f7fd65ae6eee7fb3ae44e98fb2f9",
"text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo",
"title": ""
},
{
"docid": "185d1c51d1ebd4428a9754a7c68d82d5",
"text": "Intersex disorders are rare congenital malformations with over 80% being diagnosed with congenital adrenal hyperplasia (CAH). It can be challenging to determine the correct gender at birth and a detailed understanding of the embryology and anatomy is crucial. The birth of a child with intersex is a true emergency situation and an immediate transfer to a medical center familiar with the diagnosis and management of intersex conditions should occur. In children with palpable gonads the presence of a Y chromosome is almost certain, since ovotestes or ovaries usually do not descend. Almost all those patients with male pseudohermaphroditism lack Mullerian structures due to MIS production from the Sertoli cells, but the insufficient testosterone stimulation leads to an inadequate male phenotype. The clinical manifestation of all CAH forms is characterized by the virilization of the outer genitalia. Surgical correction techniques have been developed and can provide satisfactory cosmetic and functional results. The discussion of the management of patients with intersex disorders continues. Current data challenge the past practice of sex reassignment. Further data are necessary to formulate guidelines and recommendations fitting for the individual situation of each patient. Until then the parents have to be supplied with the current data and outcome studies to make the correct choice for their child.",
"title": ""
},
{
"docid": "f3aa019816ae399c3fe834ffce3db53e",
"text": "This paper presents a method to incorporate 3D line segments in vision based SLAM. A landmark initialization method that relies on the Plucker coordinates to represent a 3D line is introduced: a Gaussian sum approximates the feature initial state and is updated as new observations are gathered by the camera. Once initialized, the landmarks state is estimated along an EKF-based SLAM approach: constraints associated with the Plucker representation are considered during the update step of the Kalman filter. The whole SLAM algorithm is validated in simulation runs and results obtained with real data are presented.",
"title": ""
},
{
"docid": "aaabe81401e33f7e2bb48dd6d5970f9b",
"text": "Brain tumor is the most life-threatening disease, and its recognition by manual detection is a most challenging task for radiologists due to variations in the size, shape, location and type of tumor. So, detection ought to be quick and precise, and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic-sets-based segmentation is performed to detect the tumor. MRI is a more powerful tool than CT for analyzing the interior segments of the body and the tumor. The tumor is detected, and the true, false and indeterminacy values of the tumor are determined by this technique; the proposed method produces the desired results.",
"title": ""
},
{
"docid": "1e2e099c849b165b31b0c36040825464",
"text": "In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)’s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST’s initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility.",
"title": ""
}
] |
scidocsrr
|
847b4f34e84358574d404ee33878859c
|
Control of cable actuated devices using smooth backlash inverse
|
[
{
"docid": "ff8f72d7afb43513c7a7a6b041a13040",
"text": "The paper first discusses the reasons why simplified solutions for the mechanical structure of fingers in robotic hands should be considered a worthy design goal. After a brief discussion about the mechanical solutions proposed so far for robotic fingers, a different design approach is proposed. It considers finger structures made of rigid links connected by flexural hinges, with joint actuation obtained by means of flexures that can be guided inside each finger according to different patterns. A simplified model of one of these structures is then presented, together with preliminary results of simulation, in order to evaluate the feasibility of the concept. Examples of technological implementation are finally presented and the perspective and problems of application are briefly discussed.",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
}
] |
[
{
"docid": "708fbc1eff4d96da2f3adaa403db3090",
"text": "We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art",
"title": ""
},
{
"docid": "d74df8673db783ff80d01f2ccc0fe5bf",
"text": "The search for strategies to mitigate undesirable economic, ecological, and social effects of harmful resource consumption has become an important, socially relevant topic. An obvious starting point for businesses that wish to make value creation more sustainable is to increase the utilization rates of existing resources. Modern social Internet technology is an effective means by which to achieve IT-enabled sharing services, which make idle resource capacity owned by one entity accessible to others who need them but do not want to own them. Successful sharing services require synchronized participation of providers and users of resources. The antecedents of the participation behavior of providers and users has not been systematically addressed by the extant literature. This article therefore proposes a model that explains and predicts the participation behavior in sharing services. Our search for a theoretical foundation revealed the Theory of Planned Behavior as most appropriate lens, because this theory enables us to integrate provider behavior and user behavior as constituents of participation behavior. The model is novel for that it is the first attempt to study the interdependencies between the behavior types in sharing service participation and for that it includes both general and specific determinants of the participation behavior.",
"title": ""
},
{
"docid": "941e570d74435332641f9d4f63c403ff",
"text": "Taniguchi defines City Logistics as “the process of totally optimising the logistics and transport activities by private companies in urban areas while considering the traffic environment, traffic congestion and energy consumption within the framework of a market economy”. The distribution of goods based on road services in urban areas contribute to traffic congestion, generates environmental impacts and in some cases incurs in high logistics costs. On the other hand the various stakeholders involved in the applications may have possibly conflicting objectives. Industrial firms, shippers, freight carriers, have individually established to meet consumer demands looking to maximize the company effectiveness and as a consequence from a social point of view the resulting logistics system is inefficient from the point of view of the social costs and environmental impacts. As a consequence the design and evaluation of City Logistics applications requires an integrated framework in which all components could work together. Therefore City Logistics models must be models that, further than including the main components of City Logistics applications, as vehicle routing and fleet management models, should be able of including also the dynamic aspects of the underlying road network, namely if ICT applications are taken into account. Some of the methodological proposals made so far are based on an integration of vehicle routing models and, dynamic traffic simulation models that emulate the actual traffic conditions providing at each time interval the estimates of the current travel times, queues, etc. on each link of the road network, that is, the information that will be used by the logistic model (i.e. a fleet management system identifying in real-time the positions of each vehicle in the fleet and its operational conditions type of load, available capacity, etc. – to determine the optimal dynamic routing and scheduling of the vehicle.",
"title": ""
},
{
"docid": "0c991f86cee8ab7be1719831161a3fec",
"text": "Conversational systems have become increasingly popular as a way for humans to interact with computers. To be able to provide intelligent responses, conversational systems must correctly model the structure and semantics of a conversation. We introduce the task of measuring semantic (in)coherence in a conversation with respect to background knowledge, which relies on the identification of semantic relations between concepts introduced during a conversation. We propose and evaluate graph-based and machine learning-based approaches for measuring semantic coherence using knowledge graphs, their vector space embeddings and word embedding models, as sources of background knowledge. We demonstrate how these approaches are able to uncover different coherence patterns in conversations on the Ubuntu Dialogue Corpus.",
"title": ""
},
{
"docid": "dcc55431a2da871c60abfd53ce270bad",
"text": "Synchrophasor Standards have evolved since the introduction of the first one, IEEE Standard 1344, in 1995. IEEE Standard C37.118-2005 introduced measurement accuracy under steady state conditions as well as interference rejection. In 2009, the IEEE started a joint project with IEC to harmonize real time communications in IEEE Standard C37.118-2005 with the IEC 61850 communication standard. These efforts led to the need to split the C37.118 into 2 different standards: IEEE Standard C37.118.1-2011 that now includes performance of synchrophasors under dynamic systems conditions; and IEEE Standard C37.118.2-2011 Synchrophasor Data Transfer for Power Systems, the object of this paper.",
"title": ""
},
{
"docid": "bb3ba0a17727d2ea4e2aba74f7144da6",
"text": "A roof automobile antenna module for Long Term Evolution (LTE) application is proposed. The module consists of two LTE antennas for the multiple-input multiple-output (MIMO) method which requests low mutual coupling between the antennas for larger capacity. On the other hand, the installation location for a roof-top module is limited from safety or appearance viewpoint and this makes the multiple LTE antennas located there cannot be separated with enough space. In order to retain high isolation between the two antennas in such compact space, the two antennas are designed to have different shapes, different heights and different polarizations, and their ground planes are placed separately. In the proposed module, one antenna is a monopole type and has its element printed on a shark-fin-shaped substrate which is perpendicular to the car-roof. Another one is a planar inverted-F antenna (PIFA) and has its element on a lower plane parallel to the roof. In this manner, the two antennas cover the LTE-bands with omni-directional radiation in the horizontal directions and high radiation gain. The two antennas have reasonably good isolation between them even the module is compact with a dimension of 62×65×73 mm3.",
"title": ""
},
{
"docid": "11eaad434ef87c06562d8cd6baea4207",
"text": "Port address hopping (PAH) communication is a powerful network moving target defense (MTD) mechanism. It was inspired by frequency hopping in wireless communications. One of the critical and difficult issues with PAH is synchronization. Existing schemes usually provide hops for each session lasting only a few seconds/minutes, making them easily influenced by network events such as transmission delays, traffic jams, packet dropouts, reordering, and retransmission. To address these problems, in this paper we propose a novel self-synchronization scheme, called ‘keyed-hashing based self-synchronization (KHSS)’. The proposed method generates the message authentication code (MAC) based on the hash based MAC (HMAC), which is then further used as the synchronization information for port address encoding and decoding. Providing the PAH communication system with one-packet-one-hopping and invisible message authentication abilities enables both clients and servers to constantly change their identities as well as perform message authentication over unreliable communication mediums without synchronization and authentication information transmissions. Theoretical analysis and simulation and experiment results show that the proposed method is effective in defending against man-in-the-middle (MITM) attacks and network scanning. It significantly outperforms existing schemes in terms of both security and hopping efficiency.",
"title": ""
},
{
"docid": "316fa5c677ce5d51a6f31a128b00ebdb",
"text": "Intelligent user interfaces have been proposed as a means to overcome some of the problems that directmanipulation interfaces cannot handle, such as: information overflow problems; providing help on how to use complex systems; or real-time cognitive overload problems. Intelligent user interfaces are also being proposed as a means to make systems individualised or personalised, thereby increasing the systems flexibility and appeal. Unfortunately, there are a number of problems not yet solved that prevent us from creating good intelligent user interface applications: there is a need for methods for how to develop them; there are demands on better usability principles for them; we need a better understanding of the possible ways the interface can utilise intelligence to improve the interaction; and finally, we need to design better tools that will enable an intelligent system to survive the life-cycle of a system (including updates of the database, system support, etc.). We define these problems further and start to outline their solutions.",
"title": ""
},
{
"docid": "abcd64d8aac6d7951fe02d562d5034ed",
"text": "Dialogue continues on the \"readiness\" of new graduates for practice despite significant advancements in the foundational educational preparation for nurses. In this paper, the findings from an exploratory study about the meaning of new graduate \"readiness\" for practice are reported. Data was collected during focus group interviews with one-hundred and fifty nurses and new graduates. Themes were generated using content analysis. Our findings point to agreement about the meaning of new graduate nurses' readiness for practice as having a generalist foundation and some job specific capabilities, providing safe client care, keeping up with the current realities of nursing practice, being well equipped with the tools needed to adapt to the future needs of clients, and possessing a balance of doing, knowing, and thinking. The findings from this exploratory study have implications for policies and programs targeted towards new graduate nurses entering practice.",
"title": ""
},
{
"docid": "07dc406a7ae61845d2a309c5aa07e072",
"text": "The advance of internet technology has stimulated the rise of professional virtual communities (PVCs). The objective of PVCs is to encourage people to exploit or explore knowledge through websites. However, many virtual communities have failed due to the reluctance of members to continue their participation in these PVCs. Motivated by such concerns, this study formulates and tests a theoretical model to explain the factors influencing individuals’ intention to continue participating in PVCs’ knowledge activities. Drawing from the information system and knowledge management literatures, two academic perspectives related to PVC continuance are incorporated in the integrated model. This model posits that an individual’s intention to stay in a professional virtual community is influenced by a contextual factor and technological factors. Specifically, the antecedents of PVC members’ intention to continue sharing knowledge include social interaction ties capital and satisfaction at post-usage stage. These variables, in turn, are adjusted based on the confirmation of pre-usage expectations. A longitudinal study is conducted with 360 members of a professional virtual community. Results indicate that the contextual factor and technological factors both exert significant impacts on PVC participants’ continuance intentions.",
"title": ""
},
{
"docid": "4a26afba58270d7ce1a0eb50bd659eae",
"text": "Recommendation can be reduced to a sub-problem of link prediction, with specific nodes (users and items) and links (similar relations among users/items, and interactions between users and items). However, the previous link prediction algorithms need to be modified to suit the recommendation cases since they do not consider the separation of these two fundamental relations: similar or dissimilar and like or dislike. In this paper, we propose a novel and unified way to solve this problem, which models the relation duality using complex number. Under this representation, the previous works can directly reuse. In experiments with the Movie Lens dataset and the Android software website AppChina.com, the presented approach achieves significant performance improvement comparing with other popular recommendation algorithms both in accuracy and coverage. Besides, our results revealed some new findings. First, it is observed that the performance is improved when the user and item popularities are taken into account. Second, the item popularity plays a more important role than the user popularity does in final recommendation. Since its notable performance, we are working to apply it in a commercial setting, AppChina.com website, for application recommendation.",
"title": ""
},
{
"docid": "37426a6261243f5bbe6d59be3826a82f",
"text": "A key to successful face recognition is accurate and reliable face alignment using automatically-detected facial landmarks. Given this strong dependency between face recognition and facial landmark detection, robust face recognition requires knowledge of when the facial landmark detection algorithm succeeds and when it fails. Facial landmark confidence represents this measure of success. In this paper, we propose two methods to measure landmark detection confidence: local confidence based on local predictors of each facial landmark, and global confidence based on a 3D rendered face model. A score fusion approach is also introduced to integrate these two confidences effectively. We evaluate both confidence metrics on two datasets for face recognition: JANUS CS2 and IJB-A datasets. Our experiments show up to 9% improvements when face recognition algorithm integrates the local-global confidence metrics.",
"title": ""
},
{
"docid": "114381e33d6c08724057e3116952dafc",
"text": "We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.",
"title": ""
},
{
"docid": "394c8f7a708d69ca26ab0617ab1530ab",
"text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.",
"title": ""
},
{
"docid": "246da8e4f576306eaf94b25786746aa5",
"text": "I am struck by how little is known about so much of cognition. One goal of this paper is to argue for the need to consider a rich set of interlocking issues in the study of cognition. Mainstream work in cognition, including my own, ignores many critical aspects of animate cognitive systems. Perhaps one reason that existing theories say so little relevant to real world activities is the neglect of social and cultural factors, of emotion, and of the major points that distinguish an animate cognitive system from an artificial one: the need to survive, to regulate its own operation, to maintain itself, to exist in the environment, to change from a small, uneducated, immature system to an adult, developed, knowledgeable one.",
"title": ""
},
{
"docid": "2abdf71604c7eaa593fa43199817838c",
"text": "We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANN) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25×) and lower-power (from 120-2850×) ML training than GPU-based hardware.",
"title": ""
},
{
"docid": "f741b47d671f4f36aa4b48dc1b112b9a",
"text": "With the development of wireless networks, the scale of network optimization problems is growing correspondingly. While algorithms have been designed to reduce complexity in solving these problems under given size, the approach of directly reducing the size of problem has not received much attention. This motivates us to investigate an innovative approach to reduce problem scale while maintaining the optimality of solution. Through analysis on the optimization solutions, we discover that part of the elements may not be involved in the solution, such as unscheduled links in the flow constrained optimization problem. The observation indicates that it is possible to reduce problem scale without affecting the solution by excluding the unused links from problem formulation. In order to identify the link usage before solving the problem, we exploit deep learning to find the latent relationship between flow information and link usage in optimal solution. Based on this, we further predict whether a link will be scheduled through link evaluation and eliminate unused link from formulation to reduce problem size. Numerical results demonstrate that the proposed method can reduce computation cost by at least 50% without affecting optimality, thus greatly improve the efficiency of solving large scale network optimization problems.",
"title": ""
},
{
"docid": "f1ce3ab900a280ccbf638653ffc19310",
"text": "Executive function (EF) refers to fundamental capacities that underlie more complex cognition and have ecological relevance across the individual's lifespan. However, emerging executive functions have rarely been studied in young preterm children (age 3) whose critical final stages of fetal development are interrupted by their early birth. We administered four novel touch-screen computerized measures of working memory and inhibition to 369 participants born between 2004 and 2006 (52 Extremely Low Birth Weight [ELBW]; 196 late preterm; 121 term-born). ELBW performed worse than term-born on simple and complex working memory and inhibition tasks and had the highest percentage of incomplete performance on a continuous performance test. The latter finding indicates developmental immaturity and the ELBW group's most at-risk preterm status. Additionally, late-preterm participants performed worse compared with term-born on measures of complex working memory but did not differ from those term-born on response inhibition measures. These results are consistent with a recent literature that identifies often subtle but detectable neurocognitive deficits in late-preterm children. Our results support the development and standardization of computerized touch-screen measures to assess EF subcomponent abilities during the formative preschool period. Such measures may be useful to monitor the developmental trajectory of critical executive function abilities in preterm children, and their use is necessary for timely recognition of deficit and application of appropriate interventional strategies.",
"title": ""
},
{
"docid": "7665f2f179d2230abbf33ccf99a7d5b0",
"text": "C R E D IT : J O E S U T L IF F /W W W .C D A D .C O M /J O E S cientifi c publications have at least two goals: (i) to announce a result and (ii) to convince readers that the result is correct. Mathematics papers are expected to contain a proof complete enough to allow knowledgeable readers to fi ll in any details. Papers in experimental science should describe the results and provide a clear enough protocol to allow successful repetition and extension. Over the past ~35 years, computational science has posed challenges to this traditional paradigm—from the publication of the four-color theorem in mathematics ( 1), in which the proof was partially performed by a computer program, to results depending on computer simulation in chemistry, materials science, astrophysics, geophysics, and climate modeling. In these settings, the scientists are often sophisticated, skilled, and innovative programmers who develop large, robust software packages. More recently, scientists who are not themselves computational experts are conducting data analysis with a wide range of modular software tools and packages. Users may often combine these tools in unusual or novel ways. In biology, scientists are now routinely able to acquire and explore data sets far beyond the scope of manual analysis, including billions of DNA bases, millions of genotypes, and hundreds of thousands of RNA measurements. Similar issues may arise in other fi elds, such as astronomy, seismology, and meteorology. While propelling enormous progress, this increasing and sometimes “indirect” use of computation poses new challenges for scientifi c publication and replication. Large data sets are often analyzed many times, with modifi cations to the methods and parameters, and sometimes even updates of the data, until the fi nal results are produced. The resulting publication often gives only scant attention to the computational details. Some have suggested these papers are “merely the advertisement of scholarship whereas the computer programs, input data, parameter values, etc. embody the scholarship itself ” ( 2). However, the actual code or software “mashup” that gave rise to the fi nal analysis may be lost or unrecoverable. For example, colleagues and I published a computational method for distinguishing between two types of acute leukemia, based on large-scale gene expression profi les obtained from DNA microarrays ( 3). This paper generated hundreds of requests from scientists interested in replicating and extending the results. The method involved a complex pipeline of steps, including (i) preprocessing of the data, to eliminate likely artifacts; (ii) selection of genes to be used in the model; (iii) building the actual model and setting the appropriate parameters for it from the training data; (iv) preprocessing independent test data; and fi nally (v) applying the model to test its effi cacy. The result was robust and replicable, and the original data were available online, but there was no standardized form in which to make available the various software components and the precise details of their use.",
"title": ""
},
{
"docid": "7c804a568854a80af9d5c564a270d079",
"text": "Large-scale online ride-sharing platforms have substantially transformed our lives by reallocating transportation resources to alleviate traffic congestion and promote transportation efficiency. An efficient fleet management strategy not only can significantly improve the utilization of transportation resources but also increase the revenue and customer satisfaction. It is a challenging task to design an effective fleet management strategy that can adapt to an environment involving complex dynamics between demand and supply. Existing studies usually work on a simplified problem setting that can hardly capture the complicated stochastic demand-supply variations in high-dimensional space. In this paper we propose to tackle the large-scale fleet management problem using reinforcement learning, and propose a contextual multi-agent reinforcement learning framework including two concrete algorithms, namely contextual deep Q-learning and contextual multi-agent actor-critic, to achieve explicit coordination among a large number of agents adaptive to different contexts. We show significant improvements of the proposed framework over state-of-the-art approaches through extensive empirical studies.",
"title": ""
}
] |
scidocsrr
|
0bae4685e259bf0ab03242f346601e9e
|
An Efficient TVL1 Algorithm for Deblurring Multichannel Images Corrupted by Impulsive Noise
|
[
{
"docid": "00bbfb52c5c54d83ea31fed1ec85b1a2",
"text": "We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.",
"title": ""
}
] |
[
{
"docid": "3c36004741028267e2c12938f112a584",
"text": "As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems. The problem is difficult, as these are large, complex systems which operate in uncertain environments, requiring data-driven machine-learning components. However, learning techniques such as Deep Neural Networks, widely used today, are inherently unpredictable and lack the theoretical foundations to provide strong assurance guarantees. We present a compositional approach for the scalable, formal verification of autonomous systems that contain Deep Neural Network components. The approach uses assumeguarantee reasoning whereby contracts, encoding the input-output behavior of individual components, allow the designer to model and incorporate the behavior of the learning-enabled components working side-by-side with the other components. We illustrate the approach on an example taken from the autonomous vehicles domain.",
"title": ""
},
{
"docid": "82f38828416d08bbb6ee195c3ca071eb",
"text": "Real-time ride-sharing applications (e.g., Uber and Lyft) are very popular in recent years. Motivated by the ride-sharing application, we propose a new type of query in road networks, called the optimal multi-meeting-point route (OMMPR) query. Given a road network G, a source nodes, a target node t, and a set of query nodes U, the OMMPR query aims at finding the best route starting from s and ending at t such that the weighted average cost between the cost of the route and the total cost of the shortest paths from every query node to the route is minimized. We show that the problem of computing the OMMPR query is NP-hard. To answer the OMMPR query efficiently, we propose two novel parameterized solutions based on dynamic programming (DP), with the number of query nodes l (i.e., l = |U|) as a parameter, which is typically very small in practice. The two proposed parameterized algorithms run in O(3l · m + 2l · n · (l + log (n))) and O(2l · (m + n · (l + log (n)))) time, respectively, where n and m denote the number of nodes and edges in graph G, thus they are tractable in practice. To reduce the search space of the DP-based algorithms, we propose two novel optimized algorithms based on bidirectional DP and a carefully-designed lower bounding technique. We conduct extensive experimental studies on four large real-world road networks, and the results demonstrate the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "e4d1053a64a09a02f4890af66b28bbba",
"text": "Branchio-oculo-facial syndrome (BOFS) is a rare autosomal dominant condition with variable expressivity, caused by mutations in the TFAP2A gene. We report a three generational family with four affected individuals. The consultand has typical features of BOFS including infra-auricular skin nodules, coloboma, lacrimal duct atresia, cleft lip, conductive hearing loss and typical facial appearance. She also exhibited a rare feature of preaxial polydactyly. Her brother had a lethal phenotype with multiorgan failure. We also report a novel variant in TFAP2A gene. This family highlights the variable severity of BOFS and, therefore, the importance of informed genetic counselling in families with BOFS.",
"title": ""
},
{
"docid": "0f122797e9102c6bab57e64176ee5e84",
"text": "We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"title": ""
},
{
"docid": "f4cb0eb6d39c57779cf9aa7b13abef14",
"text": "Algorithms that learn to generate data whose distributions match that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as instable training due to the minmax optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learn the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "293ee45d26440539398188cf086655c1",
"text": "This article reviews recent computer vision techniques used in the assessment of image aesthetic quality. Image aesthetic assessment aims at computationally distinguishing high-quality from low-quality photos based on photographic rules, typically in the form of binary classification or quality scoring. A variety of approaches has been proposed in the literature to try to solve this challenging problem. In this article, we summarize these approaches based on visual feature types (hand-crafted features and deep features) and evaluation criteria (data set characteristics and evaluation metrics). The main contributions and novelties of the reviewed approaches are highlighted and discussed. In addition, following the emergence of deep-learning techniques, we systematically evaluate recent deep-learning settings that are useful for developing a robust deep model for aesthetic scoring.",
"title": ""
},
{
"docid": "72682ac5c2ec0a1ad1f211f3de562062",
"text": "Red blood cell (RBC) aggregation is greatly affected by cell deformability and reduced deformability and increased RBC aggregation are frequently observed in hypertension, diabetes mellitus, and sepsis, thus measurement of both these parameters is essential. In this study, we investigated the effects of cell deformability and fibrinogen concentration on disaggregating shear stress (DSS). The DSS was measured with varying cell deformability and geometry. The deformability of cells was gradually decreased with increasing concentration of glutaraldehyde (0.001~0.005%) or heat treatment at 49.0°C for increasing time intervals (0~7 min), which resulted in a progressive increase in the DSS. However, RBC rigidification by either glutaraldehyde or heat treatment did not cause the same effect on RBC aggregation as deformability did. The effect of cell deformability on DSS was significantly increased with an increase in fibrinogen concentration (2~6 g/L). These results imply that reduced cell deformability and increased fibrinogen levels play a synergistic role in increasing DSS, which could be used as a novel independent hemorheological index to characterize microcirculatory diseases, such as diabetic complications with high sensitivity.",
"title": ""
},
{
"docid": "2526e181083af43aac08a77c67ec402f",
"text": "In its native Europe, the bumblebee, Bombus terrestris (L.) has co-evolved with a large array of parasites whose numbers are negatively linked to the genetic diversity of the colony. In Tasmania B. terrestris was first detected in 1992 and has since spread over much of the state. In order to understand the bee’s invasive success and as part of a wider study into the genetic diversity of bumblebees across Tasmania, we screened bees for co-invasions of ectoparasitic and endoparasitic mites, nematodes and micro-organisms, and searched their nests for brood parasites. The only bee parasite detected was the relatively benign acarid mite Kuzinia laevis (Dujardin) whose numbers per bee did not vary according to region. Nests supported no brood parasites, but did contain the pollen-feeding life stages of K. laevis. Upon summer-autumn collected drones and queens, mites were present on over 80% of bees, averaged ca. 350–400 per bee and were more abundant on younger bees. Nest searching spring queens had similar mite numbers to those collected in summer-autumn but mite numbers dropped significantly once spring queens began foraging for pollen. The average number of mites per queen bee was over 30 fold greater than that reported in Europe. Mite incidence and mite numbers were significantly lower on worker bees than drones or queens, being present on just 51% of bees and averaging 38 mites per bee. Our reported incidence of worker bee parasitism by this mite is 5–50 times higher than reported in Europe. That only one parasite species co-invaded Tasmania supports the notion that a small number of queens founded the Tasmanian population. However, it is clearly evident that both the bee in the absence of parasites, and the mite have been extraordinarily successful invaders.",
"title": ""
},
{
"docid": "1c04afe05954a425209aaf0267236255",
"text": "Twitter is an online social networking service where worldwide users publish their opinions on a variety of topics, discuss current issues, complain, and express positive or negative sentiment for products they use in daily life. Therefore, Twitter is a rich source of data for opinion mining and sentiment analysis. However, sentiment analysis for Twitter messages (tweets) is regarded as a challenging problem because tweets are short and informal. This paper focuses on this problem by the analyzing of symbols called emotion tokens, including emotion symbols (e.g. emoticons and emoji ideograms). According to observation, these emotion tokens are commonly used. They directly express one’s emotions regardless of his/her language, hence they have become a useful signal for sentiment analysis on multilingual tweets. The paper describes the approach to performing sentiment analysis, that is able to determine positive, negative and neutral sentiments for a tested topic.",
"title": ""
},
{
"docid": "329195d467c5084dcfeb5762e885aec2",
"text": "This paper provides an analysis of human mobility data in an urban area using the amount of available bikes in the stations of the community bicycle program Bicing in Barcelona. Based on data sampled from the operator’s website, it is possible to detect temporal and geographic mobility patterns within the city. These patterns are applied to predict the number of available bikes for any station someminutes/hours ahead. The predictions could be used to improve the bicycle programand the information given to the users via the Bicing website. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "efb9686dbd690109e8e5341043648424",
"text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.",
"title": ""
},
{
"docid": "36f73143b6f4d80e8f1d77505fabbfcf",
"text": "Progress of IoT and ubiquitous computing technologies has strong anticipation to realize smart services in households such as efficient energy-saving appliance control and elderly monitoring. In order to put those applications into practice, high-accuracy and low-cost in-home living activity recognition is essential. Many researches have tackled living activity recognition so far, but the following problems remain: (i)privacy exposure due to utilization of cameras and microphones; (ii) high deployment and maintenance costs due to many sensors used; (iii) burden to force the user to carry the device and (iv) wire installation to supply power and communication between sensor node and server; (v) few recognizable activities; (vi) low recognition accuracy. In this paper, we propose an in-home living activity recognition method to solve all the problems. To solve the problems (i)--(iv), our method utilizes only energy harvesting PIR and door sensors with a home server for data collection and processing. The energy harvesting sensor has a solar cell to drive the sensor and wireless communication modules. To solve the problems (v) and (vi), we have tackled the following challenges: (a) determining appropriate features for training samples; and (b) determining the best machine learning algorithm to achieve high recognition accuracy; (c) complementing the dead zone of PIR sensor semipermanently. We have conducted experiments with the sensor by five subjects living in a home for 2-3 days each. As a result, the proposed method has achieved F-measure: 62.8% on average.",
"title": ""
},
{
"docid": "be76c7f877ad43668fe411741478c43b",
"text": "With the surging of smartphone sensing, wireless networking, and mobile social networking techniques, Mobile Crowd Sensing and Computing (MCSC) has become a promising paradigm for cross-space and large-scale sensing. MCSC extends the vision of participatory sensing by leveraging both participatory sensory data from mobile devices (offline) and user-contributed data from mobile social networking services (online). Further, it explores the complementary roles and presents the fusion/collaboration of machine and human intelligence in the crowd sensing and computing processes. This article characterizes the unique features and novel application areas of MCSC and proposes a reference framework for building human-in-the-loop MCSC systems. We further clarify the complementary nature of human and machine intelligence and envision the potential of deep-fused human--machine systems. We conclude by discussing the limitations, open issues, and research opportunities of MCSC.",
"title": ""
},
{
"docid": "b0382aa0f8c8171b78dba1c179554450",
"text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the `1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.",
"title": ""
},
{
"docid": "3832812ee527c811a504c10619c59ee3",
"text": "The growing need of the driving public for accurate traffic information has spurred the deployment of large scale dedicated monitoring infrastructure systems, which mainly consist in the use of inductive loop detectors and video cameras. On-board electronic devices have been proposed as an alternative traffic sensing infrastructure, as they usually provide a cost-effective way to collect traffic data, leveraging existing communication infrastructure such as the cellular phone network. A traffic monitoring system based on GPS-enabled smartphones exploits the extensive coverage provided by the cellular network, the high accuracy in position and velocity measurements provided by GPS devices, and the existing infrastructure of the communication network. This article presents a field experiment nicknamed Mobile Century, which was conceived as a proof of concept of such a system. Mobile Century included 100 vehicles carrying a GPS-enabled Nokia N95 phone driving loops on a 10-mile stretch of I-880 near Union City, California, for 8 hours. Data were collected using virtual trip lines, which are geographical markers stored in the handset that probabilistically trigger position and speed updates when the handset crosses them. The proposed prototype system provided sufficient data for traffic monitoring purposes while managing the privacy of participants. The data obtained in the experiment were processed in real-time and successfully broadcast on the internet, demonstrating the feasibility of the proposed system for real-time traffic monitoring. Results suggest that a 2-3% penetration of cell phones in the driver population is enough to provide accurate measurements of the velocity of the traffic flow.",
"title": ""
},
{
"docid": "d566e25ed5ff6e479887a350572cadad",
"text": "Lorentz reciprocity is a fundamental characteristic of the vast majority of electronic and photonic structures. However, non-reciprocal components such as isolators, circulators and gyrators enable new applications ranging from radio frequencies to optical frequencies, including full-duplex wireless communication and on-chip all-optical information processing. Such components today dominantly rely on the phenomenon of Faraday rotation in magneto-optic materials. However, they are typically bulky, expensive and not suitable for insertion in a conventional integrated circuit. Here we demonstrate magnetic-free linear passive non-reciprocity based on the concept of staggered commutation. Commutation is a form of parametric modulation with very high modulation ratio. We observe that staggered commutation enables time-reversal symmetry breaking within very small dimensions (λ/1,250 × λ/1,250 in our device), resulting in a miniature radio-frequency circulator that exhibits reduced implementation complexity, very low loss, strong non-reciprocity, significantly enhanced linearity and real-time reconfigurability, and is integrated in a conventional complementary metal-oxide-semiconductor integrated circuit for the first time.",
"title": ""
},
{
"docid": "ad25cdd1bc4012d6dae8029654c512bd",
"text": "AIM\nThe purpose of this study was to evaluate factors associated with the fill of inter-dental spaces by gingival papillae.\n\n\nMATERIALS AND METHODS\nNinety-six adult subjects were evaluated. Papilla score (PS), tooth form/shape, interproximal contact length and gingival thickness were recorded for 672 maxillary anterior and first pre-molar interproximal sites. Statistical analyses included a non-parametric chi(2) test, anova, the Mixed Procedure for SAS and Pearson's correlation coefficient (r).\n\n\nRESULTS\nPapilla deficiency was more frequent in older subjects (p<0.05), as papilla height decreased 0.012 mm with each year of increasing age (p<0.05). Competent papillae (complete fill inter-dentally) were associated with: (1) crown width: length >or=0.87; (2) proximal contact length >or=2.8 mm; (3) bone crest-contact point <or=5 mm; and (4) interproximal gingival tissue thickness >or=1.5 mm. Gingival thickness correlated negatively with PS (r=-0.37 to -0.54) and positively with tissue height (r=0.23-0.43). Tooth form (i.e. crown width to length ratio) correlated negatively with PS (r=-0.37 to -0.61). Other parameters failed to show any significant effects.\n\n\nCONCLUSIONS\nGingival papilla appearance was associated significantly with subject age, tooth form/shape, proximal contact length, crestal bone height and interproximal gingival thickness.",
"title": ""
},
{
"docid": "b278b9e532600ea1da8c19e07807d899",
"text": "Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications. The success of the module relies on the assumption that the network’s computation and reasoning is represented in its internal layer activations. While in principle InterpNET could be applied to any existing classification architecture, it is evaluated via an image classification and explanation task. Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. While the current state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a much higher METEOR score of 37.9.",
"title": ""
},
{
"docid": "dd9a09431e7816e6774aaf7b2ce33a6f",
"text": "Image based social networks are among the most popular social networking services in recent years. With tremendous images uploaded everyday, understanding users’ preferences to the user-generated images and recommending them to users have become an urgent need. However, this is a challenging task. On one hand, we have to overcome the extremely data sparsity issue in image recommendation. On the other hand, we have to model the complex aspects that influence users’ preferences to these highly subjective content from the heterogeneous data. In this paper, we develop an explainable social contextual image recommendation model to simultaneously explain and predict users’ preferences to images. Specifically, in addition to user interest modeling in the standard recommendation, we identify three key aspects that affect each user’s preference on the social platform, where each aspect summarizes a contextual representation from the complex relationships between users and images. We design a hierarchical attention model in recommendation process given the three contextual aspects. Particularly, the bottom layered attention networks learn to select informative elements of each aspect from heterogeneous data, and the top layered attention network learns to score the aspect importance of the three identified aspects for each user. In this way, we could overcome the data sparsity issue by leveraging the social contextual aspects from heterogeneous data, and explain the underlying reasons for each user’s behavior with the learned hierarchial attention scores. Extensive experimental results on realworld datasets clearly show the superiority of our proposed model.",
"title": ""
}
] |
scidocsrr
|
509d38ceda71f68928cfcc16c6e5e604
|
Protected area needs in a changing climate
|
[
{
"docid": "a28be57b2eb045a525184b67afb14bb2",
"text": "Climate change has already triggered species distribution shifts in many parts of the world. Increasing impacts are expected for the future, yet few studies have aimed for a general understanding of the regional basis for species vulnerability. We projected late 21st century distributions for 1,350 European plants species under seven climate change scenarios. Application of the International Union for Conservation of Nature and Natural Resources Red List criteria to our projections shows that many European plant species could become severely threatened. More than half of the species we studied could be vulnerable or threatened by 2080. Expected species loss and turnover per pixel proved to be highly variable across scenarios (27-42% and 45-63% respectively, averaged over Europe) and across regions (2.5-86% and 17-86%, averaged over scenarios). Modeled species loss and turnover were found to depend strongly on the degree of change in just two climate variables describing temperature and moisture conditions. Despite the coarse scale of the analysis, species from mountains could be seen to be disproportionably sensitive to climate change (approximately 60% species loss). The boreal region was projected to lose few species, although gaining many others from immigration. The greatest changes are expected in the transition between the Mediterranean and Euro-Siberian regions. We found that risks of extinction for European plants may be large, even in moderate scenarios of climate change and despite inter-model variability.",
"title": ""
}
] |
[
{
"docid": "795a4d9f2dc10563dfee28c3b3cd0f08",
"text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.",
"title": ""
},
{
"docid": "72c79b86a91f7c8453cd6075314a6b4d",
"text": "This talk aims to introduce LATEX users to XSL-FO. It does not attempt to give an exhaustive view of XSL-FO, but allows a LATEX user to get started. We show the common and different points between these two approaches of word processing.",
"title": ""
},
{
"docid": "888de1004e212e1271758ac35ff9807d",
"text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.",
"title": ""
},
{
"docid": "718e31eabfd386768353f9b75d9714eb",
"text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.",
"title": ""
},
{
"docid": "b2817d85893a624574381eee4f8648db",
"text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). The proposed antenna provides wideband operation and exhibits great flexible behavior. The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.",
"title": ""
},
{
"docid": "d197875ea8637bf36d2746a2a1861c23",
"text": "There are billions of Internet of things (IoT) devices connecting to the Internet and the number is increasing. As a still ongoing technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attack, sinkhole attack, denial of service attack, malicious code injection, and man in middle attack. IoT devices are more vulnerable to attacks because it is simple and some security measures can not be implemented. We analyze the privacy and security challenges in the IoT and survey on the corresponding solutions to enhance the security of IoT architecture and protocol. We should focus more on the security and privacy on IoT and help to promote the development of IoT.",
"title": ""
},
{
"docid": "3d12dea4ae76c5af54578262996fe0bb",
"text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.",
"title": ""
},
{
"docid": "a58930da8179d71616b8b6ef01ed1569",
"text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.",
"title": ""
},
{
"docid": "73adcdf18b86ab3598731d75ac655f2c",
"text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
{
"docid": "bfa38fded95303834d487cb27d228ad7",
"text": "Apparel classification encompasses the identification of an outfit in an image. The area has its applications in social media advertising, e-commerce and criminal law. In our work, we introduce a new method for shopping apparels online. This paper describes our approach to classify images using Convolutional Neural Networks. We concentrate mainly on two aspects of apparel classification: (1) Multiclass classification of apparel type and (2) Similar Apparel retrieval based on the query image. This shopping technique relieves the burden of storing a lot of information related to the images and traditional ways of filtering search results can be replaced by image filters",
"title": ""
},
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "80ff93b5f2e0ff3cff04c314e28159fc",
"text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.",
"title": ""
},
{
"docid": "f8b0dcd771e7e7cf50a05cf7221f4535",
"text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.",
"title": ""
},
{
"docid": "f71b1df36ee89cdb30a1dd29afc532ea",
"text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "232bf10d578c823b0cd98a3641ace44a",
"text": "The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "a63db4f5e588e23e4832eae581fc1c4b",
"text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.",
"title": ""
},
{
"docid": "dba13fea4538f23ea1208087d3e81d6b",
"text": "This paper investigates the effectiveness of using MeSH® in PubMed through its automatic query expansion process: Automatic Term Mapping (ATM). We run Boolean searches based on a collection of 55 topics and about 160,000 MEDLINE® citations used in the 2006 and 2007 TREC Genomics Tracks. For each topic, we first automatically construct a query by selecting keywords from the question. Next, each query is expanded by ATM, which assigns different search tags to terms in the query. Three search tags: [MeSH Terms], [Text Words], and [All Fields] are chosen to be studied after expansion because they all make use of the MeSH field of indexed MEDLINE citations. Furthermore, we characterize the two different mechanisms by which the MeSH field is used. Retrieval results using MeSH after expansion are compared to those solely based on the words in MEDLINE title and abstracts. The aggregate retrieval performance is assessed using both F-measure and mean rank precision. Experimental results suggest that query expansion using MeSH in PubMed can generally improve retrieval performance, but the improvement may not affect end PubMed users in realistic situations.",
"title": ""
}
] |
scidocsrr
|
0f10327bfb8a54d1f87bcbc48c4b3125
|
A semiotic analysis of the genetic information system *
|
[
{
"docid": "0be3178ff2f412952934a49084ee8edc",
"text": "This article introduces the physics of information in the context of molecular biology and genomics. Entropy and information, the two central concepts of Shannon’s theory of information and communication, are often confused with each other but play transparent roles when applied to statistical ensembles (i.e., identically prepared sets) of symbolic sequences. Such an approach can distinguish between entropy and information in genes, predict the secondary structure of ribozymes, and detect the covariation between residues in folded proteins. We also review applications to molecular sequence and structure analysis, and introduce new tools in the characterization of resistance mutations, and in drug design. In a curious twist of history, the dawn of the age of genomics has both seen the rise of the science of bioinformatics as a tool to cope with the enormous amounts of data being generated daily, and the decline of the theory of information as applied to molecular biology. Hailed as a harbinger of a “new movement” (Quastler 1953) along with Cybernetics, the principles of information theory were thought to be applicable to the higher functions of living organisms, and able to analyze such functions as metabolism, growth, and differentiation (Quastler 1953). Today, the metaphors and the jargon of information theory are still widely used (Maynard Smith 1999a, 1999b), as opposed to the mathematical formalism, which is too often considered to be inapplicable to biological information. Clearly, looking back it appears that too much hope was laid upon this theory’s relevance for biology. However, there was well-founded optimism that information theory ought to be able to address the complex issues associated with the storage of information in the genetic code, only to be repeatedly questioned and rebuked (see, e.g., Vincent 1994, Sarkar 1996). In this article, I outline the concepts of entropy and information (as defined by Shannon) in the context of molecular biology. We shall see that not only are these terms well-defined and useful, they also coincide precisely with what we intuitively mean when we speak about information stored in genes, for example. I then present examples of applications of the theory to measuring the information content of biomolecules, the identification of polymorphisms, RNA and protein secondary structure prediction, the prediction and analysis of molecular interactions, and drug design. 1 Entropy and Information Entropy and information are often used in conflicting manners in the literature. A precise understanding, both mathematical and intuitive, of the notion of information (and its relationship to entropy) is crucial for applications in molecular biology. Therefore, let us begin by outlining Shannon’s original entropy concept (Shannon, 1948). 1.1 Shannon’s Uncertainty Measure Entropy in Shannon’s theory (defined mathematically below) is a measure of uncertainty about the identity of objects in an ensemble. Thus, while “en-",
"title": ""
}
] |
[
{
"docid": "d9870dc31895226f60537b3e8591f9fd",
"text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "96b2bebeea8fd724609501e753fcf324",
"text": "From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ≥ −31.94% and videos ≥ −18.60%) that persisted at least 2 months later (games ≥ −23.57% and videos ≥ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.",
"title": ""
},
{
"docid": "1bd75e455b57b14c2a275e50aff0d2db",
"text": "Keratosis pilaris is a common skin disorder comprising less common variants and rare subtypes, including keratosis pilaris rubra, erythromelanosis follicularis faciei et colli, and the spectrum of keratosis pilaris atrophicans. Data, and critical analysis of existing data, are lacking, so the etiologies, pathogeneses, disease associations, and treatments of these clinical entities are poorly understood. The present article aims to fill this knowledge gap by reviewing literature in the PubMed, EMBASE, and CINAHL databases and providing a comprehensive, analytical summary of the clinical characteristics and pathophysiology of keratosis pilaris and its subtypes through the lens of disease associations, genetics, and pharmacologic etiologies. Histopathologic, genomic, and epidemiologic evidence points to keratosis pilaris as a primary disorder of the pilosebaceous unit as a result of inherited mutations or acquired disruptions in various biomolecular pathways. Recent data highlight aberrant Ras signaling as an important contributor to the pathophysiology of keratosis pilaris and its subtypes. We also evaluate data on treatments for keratosis pilaris and its subtypes, including topical, systemic, and energy-based therapies. The effectiveness of various types of lasers in treating keratosis pilaris and its subtypes deserves wider recognition.",
"title": ""
},
{
"docid": "1c8c532c86db01056ffff2aac49fa248",
"text": "In many classification problems, the input is represented as a set of features, e.g., the bag-of-words (BoW) representation of documents. Support vector machines (SVMs) are widely used tools for such classification problems. The performance of the SVMs is generally determined by whether kernel values between data points can be defined properly. However, SVMs for BoW representations have a major weakness in that the co-occurrence of different but semantically similar words cannot be reflected in the kernel calculation. To overcome the weakness, we propose a kernel-based discriminative classifier for BoW data, which we call the latent support measure machine (latent SMM). With the latent SMM, a latent vector is associated with each vocabulary term, and each document is represented as a distribution of the latent vectors for words appearing in the document. To represent the distributions efficiently, we use the kernel embeddings of distributions that hold high order moment information about distributions. Then the latent SMM finds a separating hyperplane that maximizes the margins between distributions of different classes while estimating latent vectors for words to improve the classification performance. In the experiments, we show that the latent SMM achieves state-of-the-art accuracy for BoW text classification, is robust with respect to its own hyper-parameters, and is useful to visualize words.",
"title": ""
},
{
"docid": "33df3da22e9a24767c68e022bb31bbe5",
"text": "The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classification model, the manager can accurately evaluate the applicant’s credit score. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct the hybrid SVM-based credit scoring models to evaluate the applicant’s credit score from the applicant’s input features. Two credit datasets in UCI database are selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved an identical classificatory accuracy with relatively few input features. Additionally, combining genetic algorithms with SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection task and model parameters optimization. Experimental results show that SVM is a promising addition to the existing data mining methods. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7a076d150ecc4382c20a6ce08f3a0699",
"text": "Cyber-physical system (CPS) is a new trend in the Internet-of-Things related research works, where physical systems act as the sensors to collect real-world information and communicate them to the computation modules (i.e. cyber layer), which further analyze and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies in the CPS cyber layer to ensure the scalability of storage, computation, and cross domain communication capabilities. Though there exist a few descriptive models of the cloud-based CPS architecture, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, where we analytically describe the key properties of the C2PS. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of fuzzy rule base with the Bayes network further enables the system with reconfiguration capability. We also describe analytically, how C2PS subsystem communications can generate even more complex system-of-systems. Later, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.",
"title": ""
},
{
"docid": "f6c874435978db83361f62bfe70a6681",
"text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. [email protected] or journal managing editor Susan Haigney at [email protected].",
"title": ""
},
{
"docid": "1f3e2c432a5f2f1a6ffcf892c6a06eab",
"text": "In this letter, we study the Ramanujan Sums (RS) transform by means of matrix multiplication. The RS are orthogonal in nature and therefore offer excellent energy conservation capability. The 1-D and 2-D forward RS transforms are easy to calculate, but their inverse transforms are not defined in the literature for non-even function <formula formulatype=\"inline\"><tex Notation=\"TeX\">$ ({\\rm mod}~ {\\rm M}) $</tex></formula>. We solved this problem by using matrix multiplication in this letter.",
"title": ""
},
{
"docid": "6e267600a085f150fa357fb778cbebd8",
"text": "The amount of data available in the internet is increasing at a very high speed. Text summarization has helped in making a better use of the information available online. Various methods were adopted to automate text summarization. However there is no existing system for summarizing Malayalam documents. In this paper we have investigated on developing efficient and effective methods to summarize Malayalam documents. This paper explains a statistical sentence scoring technique and a semantic graph based technique for text summarization.",
"title": ""
},
{
"docid": "a9433e55cd58416bbe2a7b8bd0d78302",
"text": "The southern Alps–Ligurian basin junction is one of the most seismically active zone of the western Europe. A constant microseismicity and moderate size events (3.5 < M < 5) are regularly recorded. The last reported historical event took place in February 1887 and reached an estimated magnitude between 6 and 6.5, causing human losses and extensive damages (intensity X, Medvedev–Sponheuer–Karnik). Such an event, occurring nowadays, could have critical consequences given the high density of population living on the French and Italian Riviera. We study the case of an offshore Mw 6.3 earthquake located at the place where two moderate size events (Mw 4.5) occurred recently and where a morphotectonic feature has been detected by a bathymetric survey. We used a stochastic empiriJ. Salichon · C. Kohrs-Sansorny · F. Courboulex Observatoire de la Côte d’Azur, Géoazur Nice-Sophia Antipolis University, CNRS, Nice, France E. Bertrand LCPC-CETE Méditerranée, Nice, France J. Salichon (B) · E. Bertrand (B) · F. Courboulex (B) Géoazur, 250 rue Albert Einstein, Sophia Antipolis, 06560 Valbonne, France e-mail: [email protected] e-mail: [email protected] e-mail: [email protected] cal Green’s functions (EGFs) summation method to produce a population of realistic accelerograms on rock and soil sites in the city of Nice. The ground motion simulations are calibrated on a rock site with a set of ground motion prediction equations (GMPEs) in order to estimate a reasonable stress-drop ratio between the February 25th, 2001, Mw 4.5, event taken as an EGF and the target earthquake. Our results show that the combination of the GMPEs and EGF techniques is an interesting tool for site-specific strong ground motion estimation.",
"title": ""
},
{
"docid": "513aec195a654a3c89de60ce9e4a52c9",
"text": "Adversarial examples and poisoning attacks have become indisputable threats to the security of modern AI systems based on deep neural networks (DNNs). The Adversarial Robustness Toolbox (ART) is a Python library designed to support researchers and developers in creating novel defence techniques, as well as in deploying practical defences of real-world AI systems. Researchers can use ART to benchmark novel defences against the state-of-the-art. For developers, the library provides interfaces which support the composition of comprehensive defence systems using individual methods as building blocks. The Adversarial Robustness Toolbox supports machine learning models (and deep neural networks (DNNs) specifically) implemented in any of the most popular deep learning frameworks (TensorFlow, Keras, PyTorch and MXNet). Currently, the library is primarily intended to improve the adversarial robustness of visual recognition systems, however, future releases that will comprise adaptations to other data modes (such as speech, text or time series) are envisioned. The ART source code is released (https://github.com/IBM/ adversarial-robustness-toolbox) under an MIT license. The release includes code examples and extensive documentation (http://adversarial-robustness-toolbox.readthedocs.io) to help researchers and developers get quickly started. *[email protected] †[email protected] ‡Contributed to ART while doing an internship with IBM Research – Ireland. 1 ar X iv :1 80 7. 01 06 9v 3 [ cs .L G ] 1 1 Ja n 20 19",
"title": ""
},
{
"docid": "521bab3f363637e0b8d8d8a830816c9b",
"text": "We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing newsbased datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-ofthe-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.",
"title": ""
},
{
"docid": "37ed4c0703266525a7d62ca98dd65e0f",
"text": "Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people-their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of the key abilities, processes, and ways in which to relate these to data from cognitive neuroscience.",
"title": ""
},
{
"docid": "27a28b74cd2c42c19fcb31c7e3c4ac67",
"text": "The backpropagation of error algorithm (BP) is impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired proposals for understanding how the brain might learn across multiple layers, and hence how it might approximate BP. As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks. Here we present results on scaling up biologically motivated models of deep learning on datasets which need deep networks with appropriate architectures to achieve good performance. We present results on the MNIST, CIFAR-10, and ImageNet datasets and explore variants of target-propagation (TP) and feedback alignment (FA) algorithms, and explore performance in both fullyand locally-connected architectures. We also introduce weight-transport-free variants of difference target propagation (DTP) modified to remove backpropagation from the penultimate layer. Many of these algorithms perform well for MNIST, but for CIFAR and ImageNet we find that TP and FA variants perform significantly worse than BP, especially for networks composed of locally connected units, opening questions about whether new architectures and algorithms are required to scale these approaches. Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward.",
"title": ""
},
{
"docid": "f099eeead6741665f061fcfe736c5c9f",
"text": "For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameterto measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.",
"title": ""
},
{
"docid": "4f2dfce9c09c62a314143353fb3e3bb5",
"text": "Same-sex marriage, barely on the political radar a decade ago, is a reality in America. How will it affect the well-being of children? Some observers worry that legalizing same-sex marriage would send the message that same-sex parenting and opposite-sex parenting are interchangeable, when in fact they may lead to different outcomes for children. To evaluate that concern, William Meezan and Jonathan Rauch review the growing body of research on how same-sex parenting affects children. After considering the methodological problems inherent in studying small, hard-to-locate populations--problems that have bedeviled this literature-the authors find that the children who have been studied are doing about as well as children normally do. What the research does not yet show is whether the children studied are typical of the general population of children raised by gay and lesbian couples. A second important question is how same-sex marriage might affect children who are already being raised by same-sex couples. Meezan and Rauch observe that marriage confers on children three types of benefits that seem likely to carry over to children in same-sex families. First, marriage may increase children's material well-being through such benefits as family leave from work and spousal health insurance eligibility. It may also help ensure financial continuity, should a spouse die or be disabled. Second, same-sex marriage may benefit children by increasing the durability and stability of their parents' relationship. Finally, marriage may bring increased social acceptance of and support for same-sex families, although those benefits might not materialize in communities that meet same-sex marriage with rejection or hostility. The authors note that the best way to ascertain the costs and benefits of the effects of same-sex marriage on children is to compare it with the alternatives. Massachusetts is marrying same-sex couples, Vermont and Connecticut are offering civil unions, and several states offer partner-benefit programs. Studying the effect of these various forms of unions on children could inform the debate over gay marriage to the benefit of all sides of the argument.",
"title": ""
},
{
"docid": "6b693af5ed67feab686a9a92e4329c94",
"text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.",
"title": ""
},
{
"docid": "fcd0c523e74717c572c288a90c588259",
"text": "From analyzing 100 assessments of coping, the authors critiqued strategies and identified best practices for constructing category systems. From current systems, a list of 400 ways of coping was compiled. For constructing lower order categories, the authors concluded that confirmatory factor analysis should replace the 2 most common strategies (exploratory factor analysis and rational sorting). For higher order categories, they recommend that the 3 most common distinctions (problem- vs. emotion-focused, approach vs. avoidance, and cognitive vs. behavioral) no longer be used. Instead, the authors recommend hierarchical systems of action types (e.g., proximity seeking, accommodation). From analysis of 6 such systems, 13 potential core families of coping were identified. Future steps involve deciding how to organize these families, using their functional homogeneity and distinctiveness, and especially their links to adaptive processes.",
"title": ""
},
{
"docid": "c0bf378bd6c763b83249163733c21f07",
"text": "Although videos appear to be very high-dimensional in terms of duration × frame-rate × resolution, temporal smoothness constraints ensure that the intrinsic dimensionality for videos is much lower. In this paper, we use this idea for investigating Domain Adaptation (DA) in videos, an area that remains under-explored. An approach that has worked well for the image DA is based on the subspace modeling of the source and target domains, which works under the assumption that the two domains share a latent subspace where the domain shift can be reduced or eliminated. In this paper, first we extend three subspace based image DA techniques for human action recognition and then combine it with our proposed Eclectic Domain Mixing (EDM) approach to improve the effectiveness of the DA. Further, we use discrepancy measures such as Symmetrized KL Divergence and Target Density Around Source for empirical study of the proposed EDM approach. While, this work mainly focuses on Domain Adaptation in videos, for completeness of the study, we comprehensively evaluate our approach using both object and action datasets. In this paper, we have achieved consistent improvements over chosen baselines and obtained some state-of-the-art results for the datasets.",
"title": ""
}
] |
scidocsrr
|
c1ff619677937073bb73ea57e592d8e5
|
Policy-Based Adaptation of Byzantine Fault Tolerant Systems
|
[
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
}
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "1a38f4218ab54ff22c776eb5572409bf",
"text": "Deep learning has achieved significant improvement in various machine learning tasks including image recognition, speech recognition, machine translation and etc. Inspired by the huge success of the paradigm, there have been lots of tries to apply deep learning algorithms to data analytics problems with big data including traffic flow prediction. However, there has been no attempt to apply the deep learning algorithms to the analysis of air traffic data. This paper investigates the effectiveness of the deep learning models in the air traffic delay prediction tasks. By combining multiple models based on the deep learning paradigm, an accurate and robust prediction model has been built which enables an elaborate analysis of the patterns in air traffic delays. In particular, Recurrent Neural Networks (RNN) has shown its great accuracy in modeling sequential data. Day-to-day sequences of the departure and arrival flight delays of an individual airport have been modeled by the Long Short-Term Memory RNN architecture. It has been shown that the accuracy of RNN improves with deeper architectures. In this study, four different ways of building deep RNN architecture are also discussed. Finally, the accuracy of the proposed prediction model was measured, analyzed and compared with previous prediction methods. It shows best accuracy compared with all other methods.",
"title": ""
},
{
"docid": "4c1060bf3e7d01f817e6ce84d1d6fac0",
"text": "1668 The smaller the volume (or share) of imports from the trading partner, the larger the impact of a preferential trade agreement on home country welfare—because the smaller the imports, the smaller the loss in tariff revenue. And the home country is better off as a small member of a large bloc than as a large member of a small bloc. Summary findings There has been a resurgence of preferential trade agreements (PTAs) partly because of the deeper European integration known as EC-92, which led to a fear of a Fortress Europe; and partly because of the U.S. decision to form a PTA with Canada. As a result, there has been a domino effect: a proliferation of PTAs, which has led to renewed debate about how PTAs affect both welfare and the multilateral system. Schiff examines two issues: the welfare impact of preferential trade agreements (PTAs) and the effect of structural and policy changes on PTAs. He asks how the PTA's effect on home-country welfare is affected by higher demand for imports; the efficiency of production of the partner or rest of the world (ROW); the share imported from the partner (ROW); and the initial protection on imports from the partner (ROW). Among his findings: • An individual country benefits more from a PTA if it imports less from its partner countries (with imports measured either in volume or as a share of total imports). This result has important implications for choice of partners. • A small home country loses from forming a free trade agreement (FTA) with a small partner country but gains from forming one with the rest of the world. In other words, the home country is better off as a small member of a large bloc than as a large member of a small bloc. This result need not hold if smuggling is a factor. • Home country welfare after formation of a FTA is higher when imports from the partner country are smaller, whether the partner country is large or small. Welfare worsens as imports from the partner country increase. • In general, a PTA is more beneficial (or less harmful) for a country with lower import demand. A PTA is also more beneficial for a country with a more efficient import-substituting sector, as this will result in a lower demand for imports. • A small country may gain from forming a PTA when smuggling …",
"title": ""
},
{
"docid": "209b304009db4a04400da178d19fe63e",
"text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion which significantly facilitates the lateral motion of vehicles on hard rough surfaces and soft soils such as sand which cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.",
"title": ""
},
{
"docid": "148d0709c58111c2f703f68d348c09af",
"text": "There has been tremendous growth in the use of mobile devices over the last few years. This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.",
"title": ""
},
{
"docid": "44bffd6caa0d90798f8ebc21a10fd248",
"text": "INTRODUCTION\nThis study describes quality indicators for the pre-analytical process, grouping errors according to patient risk as critical or major, and assesses their evaluation over a five-year period.\n\n\nMATERIALS AND METHODS\nA descriptive study was made of the temporal evolution of quality indicators, with a study population of 751,441 analytical requests made during the period 2007-2011. The Runs Test for randomness was calculated to assess changes in the trend of the series, and the degree of control over the process was estimated by the Six Sigma scale.\n\n\nRESULTS\nThe overall rate of critical pre-analytical errors was 0.047%, with a Six Sigma value of 4.9. The total rate of sampling errors in the study period was 13.54% (P = 0.003). The highest rates were found for the indicators \"haemolysed sample\" (8.76%), \"urine sample not submitted\" (1.66%) and \"clotted sample\" (1.41%), with Six Sigma values of 3.7, 3.7 and 2.9, respectively.\n\n\nCONCLUSION\nThe magnitude of pre-analytical errors was accurately valued. While processes that triggered critical errors are well controlled, the results obtained for those regarding specimen collection are borderline unacceptable; this is particularly so for the indicator \"haemolysed sample\".",
"title": ""
},
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
{
"docid": "82159d19fc5a5ac7242c7e60d75e1f09",
"text": "in the domain of computer technologies, our understanding of knowledge is somewhat more elusive. Data and information can be exactly the same thing, since their distinction lies in the context to which they are applied. One person's data could be another person's information. Although data often refers to the raw codification of facts, usually useful to few, it could be information for someone who could apply it to a decision or problem context. Typically, data is classified, summarized, transferred, or corrected to add value, and becomes information within a certain context. This conversion is relatively mechanical, and it has long been facilitated by storage, processing, and communication technologies. These technologies add place, time, and form utility to the data. In doing so, the information serves to \"inform\" or reduce uncertainty within the problem domain. Therefore, information is dyadic within the attendant condition, i.e., it has only utility within the context. Data, on the other hand, is not dyadic within the context. Independent of context, data and information could be identical.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "e294a94b03a2bd958def360a7bce2a46",
"text": "The seismic loss estimation is greatly influenced by the identification of the failure mechanism and distribution of the structures. In case of infilled structures, the final failure mechanism greatly differs to that expected during the design and the analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment and consequently the loss estimation a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infilled panels on physical structural damages and the associated economic losses, under seismic excitation. The selected index buildings have been simulated following real case typical mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and built quality have been implemented in the models. The fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. The results indicate that the presence of infill panel have a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimations.",
"title": ""
},
{
"docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d",
"text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.",
"title": ""
},
{
"docid": "509f840b001b01825425db6209cb7c82",
"text": "A system of rigid bodies with multiple simultaneous unilateral contacts is considered in this paper. The problem is to predict the velocities of the bodies and the frictional forces acting on the simultaneous multicontacts. This paper presents a numerical method based on an extension of an explicit time-stepping scheme and an application of the differential inclusion process introduced by J. J. Moreau. From a differential kinematic analysis of contacts, we derive a set of transfer equations in the velocity-based time-stepping formulation. In applying the Gauss-Seidel iterative scheme, the transfer equations are combined with the Signorini conditions and Coulomb's friction law. The contact forces are properly resolved in each iteration, without resorting to any linearization of the friction cone. The proposed numerical method is illustrated with examples, and its performance is compared with an acceleration-based scheme using linear complementary techniques. Multibody contact systems are broadly involved in many engineering applications. The motivation of this is to solve for the contact forces and body motion for planning the fixture-inserting operation. However, the results of the paper can be generally used in problems involving multibody contacts, such as robotic manipulation, mobile robots, computer graphics and simulation, etc. The paper presents a numerical method based on an extension of an explicit time-stepping scheme, and an application of the differential inclusion process introduced by J. J. Moreau, and compares the numerical results with an acceleration-based scheme with linear complementary techniques. We first describe the mathematical model of contact kinematics of smooth rigid bodies. Then, we present the Gauss-Seidel iterative method for resolving the multiple simultaneous contacts within the time-stepping framework. Finally, numerical examples are given and compared with the previous results of a different approach, which shows that the simulation results of these two methods agree well, and it is also generally more efficient, as it is an explicit method. This paper focuses on the description of the proposed time-stepping and Gauss-Seidel iterations and their numerical implementation, and several theoretical issues are yet to be resolved, like the convergence and uniqueness of the Gauss-Seidel iteration, and the existence and uniqueness of a positive k in solving frictional forces. However, our limited numerical experience has indicated positive answers to these questions. We have always found a single positive root of k and a convergent solution in the Gauss-Seidel iteration for all of our examples.",
"title": ""
},
{
"docid": "f784ffcdb63558f5f22fe90058853904",
"text": "Stylometric analysis of prose is typically limited to classification tasks such as authorship attribution. Since the models used are typically black boxes, they give little insight into the stylistic differences they detect. In this paper, we characterize two prose genres syntactically: chick lit (humorous novels on the challenges of being a modern-day urban female) and high literature. First, we develop a top-down computational method based on existing literary-linguistic theory. Using an off-the-shelf parser we obtain syntactic structures for a Dutch corpus of novels and measure the distribution of sentence types in chick-lit and literary novels. The results show that literature contains more complex (subordinating) sentences than chick lit. Secondly, a bottom-up analysis is made of specific morphological and syntactic features in both genres, based on the parser’s output. This shows that the two genres can be distinguished along certain features. Our results indicate that detailed insight into stylistic differences can be obtained by combining computational linguistic analysis with literary theory.",
"title": ""
},
{
"docid": "4a4a11d2779eab866ff32c564e54b69d",
"text": "Although backpropagation neural networks generally predict better than decision trees do for pattern classiication problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees. In many applications, more often than not, explicit knowledge is needed by human experts. This work drives a symbolic representation for neural networks to make explicit each prediction of a neural network. An algorithm is proposed and implemented to extract symbolic rules from neural networks. Explicitness of the extracted rules is supported by comparing the symbolic rules generated by decision trees methods. Empirical study demonstrates that the proposed algorithm generates high quality rules from neural networks comparable with those of decision trees in terms of predictive accuracy, number of rules and average number of conditions for a rule. The symbolic rules from nerual networks preserve high predictive accuracy of original networks. An early and shorter version of this paper has been accepted for presentation at IJCAI'95.",
"title": ""
},
{
"docid": "bfc12c790b5195861ba74f024d7cc9b5",
"text": "Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems.",
"title": ""
},
{
"docid": "95d767d1b9a2ba2aecdf26443b3dd4af",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "971147ec0ca3210b834da65e563120d0",
"text": "The production of adenosine represents a critical endogenous mechanism for regulating immune and inflammatory responses during conditions of stress, injury, or infection. Adenosine exerts predominantly protective effects through activation of four 7-transmembrane receptor subtypes termed A1, A2A, A2B, and A3, of which the A2A adenosine receptor (A2AAR) is recognised as a major mediator of anti-inflammatory responses. The A2AAR is widely expressed on cells of the immune system and numerous in vitro studies have identified its role in suppressing key stages of the inflammatory process, including leukocyte recruitment, phagocytosis, cytokine production, and immune cell proliferation. The majority of actions produced by A2AAR activation appear to be mediated by cAMP, but downstream events have not yet been well characterised. In this article, we review the current evidence for the anti-inflammatory effects of the A2AAR in different cell types and discuss possible molecular mechanisms mediating these effects, including the potential for generalised suppression of inflammatory gene expression through inhibition of the NF-kB and JAK/STAT proinflammatory signalling pathways. We also evaluate findings from in vivo studies investigating the role of the A2AAR in different tissues in animal models of inflammatory disease and briefly discuss the potential for development of selective A2AAR agonists for use in the clinic to treat specific inflammatory conditions.",
"title": ""
},
{
"docid": "bdc82fead985055041171d63415f9dde",
"text": "We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads. This is the first agreement corpus to offer full-document annotations for threaded discussions. We provide a methodology for coding responses as well as an implemented tool with an interface that facilitates annotation of a specific response while viewing the full context of the thread. Both the results of an annotator questionnaire and high inter-annotator agreement statistics indicate that the annotations collected are of high quality.",
"title": ""
},
{
"docid": "0b9b85dc4f80e087f591f89b12bb6146",
"text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.",
"title": ""
}
] |
scidocsrr
|
a7ecc679e00a090a141312f80c738635
|
PowerSpy: Location Tracking using Mobile Device Power Analysis
|
[
{
"docid": "5e286453dfe55de305b045eaebd5f8fd",
"text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.",
"title": ""
},
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "64a730ce8aad5d4679409be43a291da7",
"text": "Background In the last years, it has been seen a shifting on society's consumption patterns, from mass consumption to second-hand culture. Moreover, consumer's perception towards second-hand stores, has been changing throughout the history of second-hand markets, according to the society's values prevailing in each time. Thus, the purchase intentions regarding second-hand clothes are influence by motivational and moderating factors according to the consumer's perception. Therefore, it was employed the theory of Guiot and Roux (2010) on motivational factors towards second-hand shopping and previous researches on moderating factors towards second-hand shopping. Purpose The purpose of this study is to explore consumer's perception and their purchase intentions towards second-hand clothing stores. Method For this, a qualitative and abductive approach was employed, combined with an exploratory design. Semi-structured face-to-face interviews were conducted utilizing a convenience sampling approach. Conclusion The findings show that consumers perception and their purchase intentions are influenced by their age and the environment where they live. However, the environment affect people in different ways. From this study, it could be found that elderly consumers are influenced by values and beliefs towards second-hand clothes. Young people are very influenced by the concept of fashion when it comes to second-hand clothes. For adults, it could be observed that price and the sense of uniqueness driver their decisions towards second-hand clothes consumption. The main motivational factor towards second-hand shopping was price. On the other hand, risk of contamination was pointed as the main moderating factor towards second-hand purchase. The study also revealed two new motivational factors towards second-hand clothing shopping, such charity and curiosity. Managers of second-hand clothing stores can make use of these findings to guide their decisions, especially related to improvements that could be done in order to make consumers overcoming the moderating factors towards second-hand shopping. The findings of this study are especially useful for second-hand clothing stores in Borås, since it was suggested couple of improvements for those stores based on the participant's opinions.",
"title": ""
},
{
"docid": "7ddc7a3fffc582f7eee1d0c29914ba1a",
"text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.",
"title": ""
},
{
"docid": "75060c7027db4e75bc42f3f3c84cad9b",
"text": "In this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to a) reduced agency costs due to enhanced stakeholder engagement and b) reduced informational asymmetry due to increased transparency. Using a large cross-section of firms, we find that firms with better CSR performance face significantly lower capital constraints. Moreover, we provide evidence that both of the hypothesized mechanisms, better stakeholder engagement and transparency around CSR performance, are important in reducing capital constraints. The results are further confirmed using several alternative measures of capital constraints, a paired analysis based on a ratings shock to CSR performance, an instrumental variables and also a simultaneous equations approach. Finally, we show that the relation is driven by both the social and the environmental dimension of CSR.",
"title": ""
},
{
"docid": "66382b88e0faa573251d5039ccd65d6c",
"text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.",
"title": ""
},
{
"docid": "6766977de80074325165a82eeb08d671",
"text": "We synthesized the literature on gamification of education by conducting a review of the literature on gamification in the educational and learning context. Based on our review, we identified several game design elements that are used in education. These game design elements include points, levels/stages, badges, leaderboards, prizes, progress bars, storyline, and feedback. We provided examples from the literature to illustrate the application of gamification in the educational context.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "b8def7be21f014693589ae99385412dd",
"text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.",
"title": ""
},
{
"docid": "8dfdd829881074dc002247c9cd38eba8",
"text": "The limited battery lifetime of modern embedded systems and mobile devices necessitates frequent battery recharging or replacement. Solar energy and small-size photovoltaic (PV) systems are attractive solutions to increase the autonomy of embedded and personal devices attempting to achieve perpetual operation. We present a battery less solar-harvesting circuit that is tailored to the needs of low-power applications. The harvester performs maximum-power-point tracking of solar energy collection under nonstationary light conditions, with high efficiency and low energy cost exploiting miniaturized PV modules. We characterize the performance of the circuit by means of simulation and extensive testing under various charging and discharging conditions. Much attention has been given to identify the power losses of the different circuit components. Results show that our system can achieve low power consumption with increased efficiency and cheap implementation. We discuss how the scavenger improves upon state-of-the-art technology with a measured power consumption of less than 1 mW. We obtain increments of global efficiency up to 80%, diverging from ideality by less than 10%. Moreover, we analyze the behavior of super capacitors. We find that the voltage across the supercapacitor may be an unreliable indicator for the stored energy under some circumstances, and this should be taken into account when energy management policies are used.",
"title": ""
},
{
"docid": "249a09e24ce502efb4669603b54b433d",
"text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7",
"title": ""
},
{
"docid": "b8cf5e3802308fe941848fea51afddab",
"text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.",
"title": ""
},
{
"docid": "43e5146e4a7723cf391b013979a1da32",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.",
"title": ""
},
{
"docid": "0321ef8aeb0458770cd2efc35615e11c",
"text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "4c16117954f9782b3a22aff5eb50537a",
"text": "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.",
"title": ""
},
{
"docid": "3b7cfe02a34014c84847eea4790037e2",
"text": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignore that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change and mostly consider small data sets, often not allowing to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.",
"title": ""
},
{
"docid": "aea4b65d1c30e80e7f60a52dbecc78f3",
"text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.",
"title": ""
},
{
"docid": "bb94ef2ab26fddd794a5b469f3b51728",
"text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1b5bc53b1039f3e7aecbc8dcb2f3b9a8",
"text": "Agricultural lands occupy 37% of the earth's land surface. Agriculture accounts for 52 and 84% of global anthropogenic methane and nitrous oxide emissions. Agricultural soils may also act as a sink or source for CO2, but the net flux is small. Many agricultural practices can potentially mitigate greenhouse gas (GHG) emissions, the most prominent of which are improved cropland and grazing land management and restoration of degraded lands and cultivated organic soils. Lower, but still significant mitigation potential is provided by water and rice management, set-aside, land use change and agroforestry, livestock management and manure management. The global technical mitigation potential from agriculture (excluding fossil fuel offsets from biomass) by 2030, considering all gases, is estimated to be approximately 5500-6000Mt CO2-eq.yr-1, with economic potentials of approximately 1500-1600, 2500-2700 and 4000-4300Mt CO2-eq.yr-1 at carbon prices of up to 20, up to 50 and up to 100 US$ t CO2-eq.-1, respectively. In addition, GHG emissions could be reduced by substitution of fossil fuels for energy production by agricultural feedstocks (e.g. crop residues, dung and dedicated energy crops). The economic mitigation potential of biomass energy from agriculture is estimated to be 640, 2240 and 16 000Mt CO2-eq.yr-1 at 0-20, 0-50 and 0-100 US$ t CO2-eq.-1, respectively.",
"title": ""
},
{
"docid": "d9214591462b0780ede6d58dab42f48c",
"text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. The organization and management of the created test suites resembles closely to the structure of the GUI under test.",
"title": ""
},
{
"docid": "514d9326cb54cec16f4dfb05deca3895",
"text": "Photo publishing in Social Networks and other Web2.0 applications has become very popular due to the pervasive availability of cheap digital cameras, powerful batch upload tools and a huge amount of storage space. A portion of uploaded images are of a highly sensitive nature, disclosing many details of the users' private life. We have developed a web service which can detect private images within a user's photo stream and provide support in making privacy decisions in the sharing context. In addition, we present a privacy-oriented image search application which automatically identifies potentially sensitive images in the result set and separates them from the remaining pictures.",
"title": ""
}
] |
scidocsrr
|
584635cb28c385f55f258d123ff5b776
|
Androtrace: framework for tracing and analyzing IOs on Android
|
[
{
"docid": "9361c6eaa2faaa3cfebc4a073ee8f3d3",
"text": "In this paper we present the analysis of two large-scale network file system workloads. We measured CIFS traffic for two enterprise-class file servers deployed in the NetApp data center for a three month period. One file server was used by marketing, sales, and finance departments and the other by the engineering department. Together these systems represent over 22 TB of storage used by over 1500 employees, making this the first ever large-scale study of the CIFS protocol. We analyzed how our network file system workloads compared to those of previous file system trace studies and took an in-depth look at access, usage, and sharing patterns. We found that our workloads were quite different from those previously studied; for example, our analysis found increased read-write file access patterns, decreased read-write ratios, more random file access, and longer file lifetimes. In addition, we found a number of interesting properties regarding file sharing, file re-use, and the access patterns of file types and users, showing that modern file system workload has changed in the past 5–10 years. This change in workload characteristics has implications on the future design of network file systems, which we describe in the paper.",
"title": ""
},
{
"docid": "78952b9185a7fb1d8e7bd7723bb1021b",
"text": "We develop and apply two new methods for analyzing file system behavior and evaluating file system changes. First, semantic block-level analysis (SBA) combines knowledge of on-disk data structures with a trace of disk traffic to infer file syste m behavior; in contrast to standard benchmarking approaches, S BA enables users to understand why the file system behaves as it does. Second, semantic trace playback (STP) enables traces of disk traffic to be easily modified to represent changes in the fi le system implementation; in contrast to directly modifying t he file system, STP enables users to rapidly gauge the benefits of new policies. We use SBA to analyze Linux ext3, ReiserFS, JFS, and Windows NTFS; in the process, we uncover many strengths and weaknesses of these journaling file systems. We also appl y STP to evaluate several modifications to ext3, demonstratin g the benefits of various optimizations without incurring the cos ts of a real implementation.",
"title": ""
}
] |
[
{
"docid": "b414ed7d896bff259dc975bf16777fa7",
"text": "We propose in this work a general procedure to efficient EM-based design of single-layer SIW interconnects, including their transitions to microstrip lines. Our starting point is developed by exploiting available empirical knowledge for SIW. We propose an efficient SIW surrogate model for direct EM design optimization in two stages: first optimizing the SIW width to achieve the specified low cutoff frequency, followed by the transition optimization to reduce reflections and extend the dominant mode bandwidth. Our procedure is illustrated by designing a SIW interconnect on a standard FR4-based substrate.",
"title": ""
},
{
"docid": "683bad69cfb2c8980020dd1f8bd8cea4",
"text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set",
"title": ""
},
{
"docid": "72cff6209ecea7538179aaf430876381",
"text": "A potential Mars Sample Return (MSR) mission would require robotic autonomous capture and manipulation of an Orbital Sample (OS) before returning the samples to Earth. An orbiter would capture the OS, manipulate to a preferential orientation, transition it through the steps required to break-the-chain with Mars, stowing it in a containment vessel or an Earth Entry Vehicle (EEV) and providing redundant containment to the OS (for example by closing and sealing the lid of the EEV). In this paper, we discuss the trade-space of concepts generated for both the individual aspects of capture and manipulation of the OS, as well as concepts for the end-to-end system. Notably, we discuss concepts for OS capture, manipulation of the OS to orient it to a preferred configuration, and steps for transitioning the OS between different stages of manipulation, ultimately securing it in a containment vessel or Earth Entry Vehicle.",
"title": ""
},
{
"docid": "04ff9fe1984fded27d638fe2552adf79",
"text": "While social networks can provide an ideal platform for upto-date information from individuals across the world, it has also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense from social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users’ past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the authors’ profile and past behaviour, the social network connections (graphs), and the content of tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time-windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. Finally we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.",
"title": ""
},
{
"docid": "0632f4a3119246ee9cd7b858dc0c3ed4",
"text": "AIM\nIn order to improve the patients' comfort and well-being during and after a stay in the intensive care unit (ICU), the patients' perspective on the intensive care experience in terms of memories is essential. The aim of this study was to describe unpleasant and pleasant memories of the ICU stay in adult mechanically ventilated patients.\n\n\nMETHOD\nMechanically ventilated adults admitted for more than 24hours from two Swedish general ICUs were included and interviewed 5 days after ICU discharge using two open-ended questions. The data were analysed exploring the manifest content.\n\n\nFINDINGS\nOf the 250 patients interviewed, 81% remembered the ICU stay, 71% described unpleasant memories and 59% pleasant. Ten categories emerged from the content analyses (five from unpleasant and five from pleasant memories), contrasting with each other: physical distress and relief of physical distress, emotional distress and emotional well-being, perceptual distress and perceptual well-being, environmental distress and environmental comfort, and stress-inducing care and caring service.\n\n\nCONCLUSION\nMost critical care patients have both unpleasant and pleasant memories of their ICU stay. Pleasant memories such as support and caring service are important to relief the stress and may balance the impact of the distressing memories of the ICU stay.",
"title": ""
},
{
"docid": "e592ccd706b039b12cc4e724a7b217cd",
"text": "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness takes precedence over precision. We analyze the protocol theoretically as well as experimentally based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.",
"title": ""
},
{
"docid": "f7de8256c3d556a298e12cb555dd50b8",
"text": "Intrusion Detection Systems (IDSs) detects the network attacks by self-learning, etc. (9). Using Genetic Algorithms for intrusion detection has. Cloud Computing Using Genetic Algorithm. 1. Ku. To overcome this problem we are implementing intrusion detection system in which we use genetic. From Ignite at OSCON 2010, a 5 minute presentation by Bill Lavender: SNORT is popular. Based Intrusion Detection System (IDS), by applying Genetic Algorithm (GA) and Networking Using Genetic Algorithm (IDS) and Decision Tree is to identify. Intrusion Detection System Using Genetic Algorithm >>>CLICK HERE<<< Genetic algorithm (GA) has received significant attention for the design and length chromosomes (VLCs) in a GA-based network intrusion detection system. The proposed approach is tested using Defense Advanced Research Project. Abstract. Intrusion Detection System (IDS) is one of the key security components in today's networking environment. A great deal of attention has been recently. Computer security has become an important part of the day today's life. Not only single computer systems but an extensive network of the computer system. presents an overview of intrusion detection system and a hybrid technique for",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "f86078de4b011a737b6bdedd86b4e82f",
"text": "Alarm fatigue can adversely affect nurses’ efficiency and concentration on their tasks, which is a threat to patients’ safety. The purpose of the present study was to develop and test the psychometric accuracy of an alarm fatigue questionnaire for nurses. This study was conducted in two stages: in stage one, in order to establish the different aspects of the concept of alarm fatigue, the researchers reviewed the available literature—articles and books—on alarm fatigue, and then consulted several experts in a meeting to define alarm fatigue and develop statements for the questionnaire. In stage two, after the final draft had been approved, the validity of the instrument was measured using the two methods of face validity (the quantitative and qualitative approaches) and content validity (the qualitative and quantitative approaches). Test–retest, Cronbach’s alpha, and Principal Component Analysis were used for item reduction and reliability analysis. Based on the results of stage one, the researchers extracted 30 statements based on a 5-point Likert scale. In stage two, after the face and content validity of the questionnaire had been established, 19 statements were left in the instrument. Based on factor loadings of the items and “alpha if item deleted” and after the second round of consultation with the expert panel, six items were removed from the scale. The test of the reliability of nurses’ alarm fatigue questionnaire based on the internal homogeneity and retest methods yielded the following results: test–retest correlation coefficient = 0.99; Guttman split-half correlation coefficient = 0.79; Cronbach’s alpha = 0.91. Regarding the importance of recognizing alarm fatigue in nurses, there is need for an instrument to measure the phenomenon. The results of the study show that the developed questionnaire is valid and reliable enough for measuring alarm fatigue in nurses.",
"title": ""
},
{
"docid": "3b91e62d6e43172e68817f679dde5182",
"text": "We model the geodetically observed secular velocity field in northwestern Turkey with a block model that accounts for recoverable elastic-strain accumulation. The block model allows us to estimate internally consistent fault slip rates and locking depths. The northern strand of the North Anatolian fault zone (NAFZ) carries approximately four times as much right-lateral motion ( 24 mm/yr) as does the southern strand. In the Marmara Sea region, the data show strain accumulation to be highly localized. We find that a straight fault geometry with a shallow locking depth of 6–7 km fits the observed Global Positioning System velocities better than does a stepped fault geometry that follows the northern and eastern edges of the sea. This shallow locking depth suggests that the moment release associated with an earthquake on these faults should be smaller, by a factor of 2.3, than previously inferred assuming a locking depth of 15 km. Online material: an updated version of velocity-field data.",
"title": ""
},
{
"docid": "1ae7ea1102f7d32c40a0e5da0d3a8256",
"text": "Unequal access to new technology is often referred to as the \"digital divide.\" But the notion of a digital divide is unclear. This paper explores the concept by attention to prior research on information access. It considers three forms of access, to a device, to an ongoing conduit, and to new social practices, with the latter being the most encompassing and valuable. Earlier research on literacy provides a useful framework for an interpretation of the digital divide based on practices, rather than merely devices or conduits. Both literacy and technology access are multiple, context-dependent, stratified along continua, tied closely for their benefits to particular functions, and dependent on not only education and culture but also power. They also both entail new forms of semiotic interpretation and production. Research in schools illuminates the importance of a precise understanding of the digital divide. Educational reform efforts that place emphasis on a device, such as the One Laptop per Child program, have proven unsuccessful, while those that support new forms of meaning-making and social engagement bring more significant benefits.",
"title": ""
},
{
"docid": "7f2fcc4b4af761292d3f77ffd1a2f7c3",
"text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.",
"title": ""
},
{
"docid": "6e01d0d9b403f8bae201baa68e04fece",
"text": "OBJECTIVE\nTo apply a mathematical model to determine the relative effectiveness of various tip-plasty maneuvers while the lateral crura are in cephalic position compared with orthotopic position.\n\n\nMETHODS\nA Matlab (MathWorks, Natick, Massachusetts) computer program, called the Tip-Plasty Simulator, was developed to model the medial and lateral crura of the tripod concept in order to estimate the change in projection, rotation, and nasal length yielded by changes in crural length. The following rhinoplasty techniques were modeled in the software program: columellar strut graft/tongue-in-groove, lateral crural steal, lateral crural overlay, medial/intermediate crural overlay, hinge release with alar strut graft, and lateral crural repositioning.\n\n\nRESULTS\nUsing the Tip-Plasty Simulator, the directionality of the change in projection, rotation, and nasal length produced by the various tip-plasty maneuvers, as shown by our mathematical model, is largely the same as that expected and observed clinically. Notably, cephalically positioned lateral crura affected the results of the rhinoplasty maneuvers studied.\n\n\nCONCLUSIONS\nBy demonstrating a difference in the magnitude of change resulting from various rhinoplasty maneuvers, the results of this study enhance the ability of the rhinoplasty surgeon to predict the effects of various tip-plasty maneuvers, given the variable range in alar cartilage orientation that he or she is likely to encounter.",
"title": ""
},
{
"docid": "4a75586965854ba2cba2fed18528e72b",
"text": "Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However the recent emergence of the TCDTIMIT corpus, with around 6000 words, 59 speakers and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques in audio-speech such as Deep Neural Networks (DNNs) and sequence discriminative training. In this paper we combine the DNN with a Hidden Markov Model (HMM) to the, so called, hybrid DNN-HMM configuration which we train using a variety of sequence discriminative training methods. This is then followed with a weighted finite state transducer. The conclusion is that the DNN offers very substantial improvement over a conventional classifier which uses a Gaussian Mixture Model (GMM) to model the densities even when optimised with Speaker Adaptive Training. Sequence adaptive training offers further improvements depending on the precise variety employed but those improvements are of the order of 10% improvement in word accuracy. Putting these two results together implies that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "6465b2af36350a444fbc6682540ff21d",
"text": "We present an algorithm for finding an <i>s</i>-sparse vector <i>x</i> that minimizes the <i>square-error</i> ∥<i>y</i> -- Φ<i>x</i>∥<sup>2</sup> where Φ satisfies the <i>restricted isometry property</i> (RIP), with <i>isometric constant</i> Δ<sub>2<i>s</i></sub> < 1/3. Our algorithm, called <b>GraDeS</b> (Gradient Descent with Sparsification) iteratively updates <i>x</i> as: [EQUATION]\n where γ > 1 and <i>H<sub>s</sub></i> sets all but <i>s</i> largest magnitude coordinates to zero. <b>GraDeS</b> converges to the correct solution in constant number of iterations. The condition Δ<sub>2<i>s</i></sub> < 1/3 is most general for which a <i>near-linear time</i> algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known, is Δ<sub>2<i>s</i></sub> < √2 -- 1.\n Our Matlab implementation of <b>GraDeS</b> outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but <b>GraDeS</b> finds the correct solution.",
"title": ""
},
{
"docid": "7adbcbcf5d458087d6f261d060e6c12b",
"text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.",
"title": ""
},
{
"docid": "9ee1765f945c8164af6e09a836402e3e",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.05.019 ⇑ Corresponding author at: Instituto Superior de E Portugal. E-mail address: [email protected] (A.J. Ferreira). Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, being able to act as pre-processors for computationally intensive methods to focus their attention on smaller subsets of promising features. The experimental results, with up to 10 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster. 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b288909b35f845a172e7d4e5a7793e3a
|
A TEACHER STUDENT NETWORK FOR FASTER VIDEO CLASSIFICATION
|
[
{
"docid": "48c03a33c5d34b246dce4932ef0fa16e",
"text": "We present a solution to “Google Cloud and YouTube8M Video Understanding Challenge” that ranked 5th place. The proposed model is an ensemble of three model families, two frame level and one video level. The training was performed on augmented dataset, with cross validation.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
}
] |
[
{
"docid": "a2b199daef2ba734700531f41ab42fdb",
"text": "Joint object detection and semantic segmentation can be applied to many fields, such as self-driving cars and unmanned surface vessels. An initial and important progress towards this goal has been achieved by simply sharing the deep convolutional features for the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet where triple supervisions including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder network. Classagnostic segmentation supervision provides an objectness prior knowledge for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage. Therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.",
"title": ""
},
{
"docid": "06c4388fb519484577d5c5556f369263",
"text": "This paper proposes new research themes concerning decision support in intermodal transport. Decision support models have been constructed for private stakeholders (e.g. network operators, drayage operators, terminal operators or intermodal operators) as well as for public actors such as policy makers and port authorities. Intermodal research topics include policy support, terminal network design, intermodal service network design, intermodal routing, drayage operations and ICT innovations. For each research topic, the current state of the art and gaps in existing models are identified. Current trends in intermodal decision support models include the introduction of environmental concerns, the development of dynamic models and the growth in innovative applications of Operations Research techniques. Limited data availability and problem size (network scale) and related computational considerations are issues which increase the complexity of decision support in intermodal transport. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a3b4ef83e513e7541cd6c1517bf0f605",
"text": "All cellular proteins undergo continuous synthesis and degradation. This permanent renewal is necessary to maintain a functional proteome and to allow rapid changes in levels of specific proteins with regulatory purposes. Although for a long time lysosomes were considered unable to contribute to the selective degradation of individual proteins, the discovery of chaperone-mediated autophagy (CMA) changed this notion. Here, we review the characteristics that set CMA apart from other types of lysosomal degradation and the subset of molecules that confer cells the capability to identify individual cytosolic proteins and direct them across the lysosomal membrane for degradation.",
"title": ""
},
{
"docid": "42bf428e3c6a4b3c4cb46a2735de872d",
"text": "We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (< $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.",
"title": ""
},
{
"docid": "44fdf1c17ebda2d7b2967c84361a5d9a",
"text": "A high-efficiency power amplifier (PA) is important in a Megahertz wireless power transfer (WPT) system. It is attractive to apply the Class-E PA for its simple structure and high efficiency. However, the conventional design for Class-E PA can only ensure a high efficiency for a fixed load. It is necessary to develop a high-efficiency Class-E PA for a wide-range load in WPT systems. A novel design method for Class-E PA is proposed to achieve this objective in this paper. The PA achieves high efficiency, above 80%, for a load ranging from 10 to 100 Ω at 6.78 MHz in the experiment.",
"title": ""
},
{
"docid": "c2fd86b36364ac9c40e873176443c4c8",
"text": "In a public service announcement on 17 March 2016, the Federal Bureau of Investigation jointly with the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) released a warning regarding the increasing vulnerability of motor vehicles to remote exploits [18]. Engine shutdowns, disabled brakes, and locked doors are a few examples of possible vehicle cybersecurity attacks. Modern cars grow into a new target for cyberattacks as they become increasingly connected. While driving on the road, sharks (i.e., hackers) need only to be within communication range of a vehicle to attack it. However, in some cases, they can hack into it while they are miles away. In this article, we aim to illuminate the latest vehicle cybersecurity threats including malware attacks, on-board diagnostic (OBD) vulnerabilities, and automobile apps threats. We illustrate the in-vehicle network architecture and demonstrate the latest defending mechanisms designed to mitigate such threats.",
"title": ""
},
{
"docid": "bb416322f9ce64045f2bd98cfeacb715",
"text": "This abstract presents our preliminary results on development of a cognitive assistant system for emergency response that aims to improve situational awareness and safety of first responders. This system integrates a suite of smart wearable sensors, devices, and analytics for real-time collection and analysis of in-situ data from incident scene and providing dynamic data-driven insights to responders on the most effective response actions to take.",
"title": ""
},
{
"docid": "04f1893ab7bd601bf1977558f480494d",
"text": "This paper describes a method for generative player modeling and its application to the automatic testing of game content using archetypal player models called procedural personas. Theoretically grounded in psychological decision theory, procedural personas are implemented using a variation of Monte Carlo Tree Search (MCTS) where the node selection criteria are developed using evolutionary computation, replacing the standard UCB1 criterion of MCTS. Using these personas we demonstrate how generative player models can be applied to a varied corpus of game levels and demonstrate how different play styles can be enacted in each level. In short, we use artificially intelligent personas to construct synthetic playtesters. The proposed approach could be used as a tool for automatic play testing when human feedback is not readily available or when quick visualization of potential interactions is necessary. Possible applications include interactive tools during game development or procedural content generation systems where many evaluations must be conducted within a short time span.",
"title": ""
},
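The persona approach above replaces the standard UCB1 rule with an evolved node-selection criterion inside MCTS. The sketch below (Python, illustrative only) shows where such a criterion plugs in; the linear form and the weight vector `w` are assumptions, not the paper's actual evolved functions.

```python
import math

class Node:
    """Minimal MCTS node: tracks visit count, total reward, and children."""
    def __init__(self):
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def ucb1(child, parent_visits, c=1.41):
    # Standard UCB1 criterion used by vanilla MCTS.
    if child.visits == 0:
        return float("inf")
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def persona_criterion(child, parent_visits, w):
    # Hypothetical persona criterion: a weighted mix of exploitation and
    # exploration terms. In the paper the selection criterion is found by
    # evolutionary computation; here w is just a fixed, hand-picked guess.
    if child.visits == 0:
        return float("inf")
    exploit = child.total_reward / child.visits
    explore = math.sqrt(math.log(parent_visits) / child.visits)
    return w[0] * exploit + w[1] * explore

def select(node, criterion, **kw):
    # Descend the tree, always picking the child that maximises the criterion.
    while node.children:
        node = max(node.children, key=lambda ch: criterion(ch, node.visits, **kw))
    return node
```

Evolving the weights (or a richer functional form) against persona-specific utility functions would then yield one selection rule per play style, e.g. `select(root, persona_criterion, w=[0.8, 1.2])`.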
{
"docid": "1cecb4765c865c0f44c76f5ed2332c13",
"text": "Speaker indexing or diarization is an important task in audio processing and retrieval. Speaker diarization is the process of labeling a speech signal with labels corresponding to the identity of speakers. This paper includes a comprehensive review on the evolution of the technology and different approaches in speaker indexing and tries to offer a fully detailed discussion on these approaches and their contributions. This paper reviews the most common features for speaker diarization in addition to the most important approaches for speech activity detection (SAD) in diarization frameworks. Two main tasks of speaker indexing are speaker segmentation and speaker clustering. This paper includes a separate review on the approaches proposed for these subtasks. However, speaker diarization systems which combine the two tasks in a unified framework are also introduced in this paper. Another discussion concerns the approaches for online speaker indexing which has fundamental differences with traditional offline approaches. Other parts of this paper include an introduction on the most common performance measures and evaluation datasets. To conclude this paper, a complete framework for speaker indexing is proposed, which is aimed to be domain independent and parameter free and applicable for both online and offline applications. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "516eb5f2160659cb1ef57a5a826efc64",
"text": "To describe physical activity (PA) and sedentary behavior (SB) patterns before and during pregnancy among Chinese, Malay and Indian women. In addition, to investigate determinants of change in PA and SB during pregnancy. The Growing Up in Singapore Towards healthy Outcomes cohort recruited first trimester pregnant women. PA and SB (sitting time and television time) before and during pregnancy were assessed as a part of an interview questionnaire at weeks 26–28 gestational clinic visit. Total energy expenditure (TEE) on PA and time in SB were calculated. Determinants of change in PA and SB were investigated using multiple logistic regression analysis. PA and SB questions were answered by 94 % (n = 1171) of total recruited subjects. A significant reduction in TEE was observed from before to during pregnancy [median 1746.0–1039.5 metabolic equivalent task (MET) min/week, p < 0.001]. The proportion of women insufficiently active (<600 MET-min/week) increased from 19.0 to 34.1 % (p < 0.001). Similarly, sitting time (median 56.0–63.0 h/week, p < 0.001) and television time (mean 16.1–16.7 h/week, p = 0.01) increased. Women with higher household income, lower level of perceived health, nausea/vomiting during pregnancy and higher level of pre-pregnancy PA were more likely to reduce PA. Women with children were less likely to reduce PA. Women reporting nausea/vomiting and lower level of pre-pregnancy sitting time were more likely to increase sitting time. Participants substantially reduced PA and increased SB by 26–28 weeks of pregnancy. Further research is needed to better understand determinants of change in PA and SB and develop effective health promotion strategies.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "3bc9eb46e389b7be4141950142c606dd",
"text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.",
"title": ""
},
{
"docid": "982ee984dda5930b025ac93749c3cf3f",
"text": "We present an application for the simulation of errors in storage systems. The software is completely parameterizable in order to simulate different types of disk errors and disk array configurations. It can be used to verify and optimize error correction schemes for storage. Realistic simulation of disk errors is a complex task as many test rounds need to be performed in order to characterize the performance of an algorithm based on highly sporadic errors under a large variety of parameters. The software allows different levels of abstraction to perform quick tests for rough estimations as well as detailed configurations for more realistic but complex simulation runs. We believe that this simulation software is the first one that is able to cover a complete range of disk error types in many commonly used disk array configurations.",
"title": ""
},
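The abstract describes a fully parameterizable simulator for disk errors. A minimal sketch of an error-injection core is given below; the parameters (independent bit-error rate, burst probability, burst length) and their defaults are illustrative assumptions rather than the tool's actual configuration.

```python
import random

def inject_errors(data, ber=1e-6, burst_prob=1e-7, burst_len=512, seed=None):
    """Flip bits in `data` (a bytearray) to emulate random and burst errors.

    ber        : independent per-bit error rate
    burst_prob : per-byte probability of starting an error burst
    burst_len  : length of a burst in bytes
    All values are illustrative; a real simulator would also model latent
    sector errors and correlated failures across a disk array.
    """
    rng = random.Random(seed)
    i = 0
    while i < len(data):
        if rng.random() < burst_prob:              # burst: corrupt a whole run
            for j in range(i, min(i + burst_len, len(data))):
                data[j] = rng.randrange(256)
            i += burst_len
            continue
        for bit in range(8):                        # independent bit flips
            if rng.random() < ber:
                data[i] ^= 1 << bit
        i += 1
    return data
```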
{
"docid": "a60436a4b4152fbfef04b5c09f740636",
"text": "Detection of surgical instruments plays a key role in ensuring patient safety in minimally invasive surgery. In this paper, we present a novel method for 2D vision-based recognition and pose estimation of surgical instruments that generalizes to different surgical applications. At its core, we propose a novel scene model in order to simultaneously recognize multiple instruments as well as their parts. We use a Convolutional Neural Network architecture to embody our model and show that the cross-entropy loss is well suited to optimize its parameters which can be trained in an end-to-end fashion. An additional advantage of our approach is that instrument detection at test time is achieved while avoiding the need for scale-dependent sliding window evaluation. This allows our approach to be relatively parameter free at test time and shows good performance for both instrument detection and tracking. We show that our approach surpasses state-of-the-art results on in-vivo retinal microsurgery image data, as well as ex-vivo laparoscopic sequences.",
"title": ""
},
{
"docid": "737ef89cc5f264dcb13be578129dca64",
"text": "We present a new approach to extracting keyphrases based on statistical language models. Our approach is to use pointwise KL-divergence between multiple language models for scoring both phraseness and informativeness, which can be unified into a single score to rank extracted phrases.",
"title": ""
},
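The scoring idea above can be made concrete: the pointwise KL contribution of a phrase is p·log(p/q), computed once against an independence model (phraseness) and once against a background corpus (informativeness), then summed into one ranking score. The sketch below assumes simple maximum-likelihood estimates with add-one smoothing; the paper's exact estimation details may differ.

```python
import math
from collections import Counter

def pointwise_kl(p, q):
    # Contribution of a single phrase to KL(P || Q): p * log(p / q).
    return p * math.log(p / q)

def score_bigrams(fg_tokens, bg_tokens):
    """Score bigrams of a foreground corpus by phraseness + informativeness.

    fg_tokens: token list of the target (foreground) corpus
    bg_tokens: token list of a large background corpus
    Smoothing choices here are illustrative, not the paper's.
    """
    fg_uni, bg_uni = Counter(fg_tokens), Counter(bg_tokens)
    fg_bi = Counter(zip(fg_tokens, fg_tokens[1:]))
    bg_bi = Counter(zip(bg_tokens, bg_tokens[1:]))
    n_fg, n_bg = len(fg_tokens), len(bg_tokens)

    scores = {}
    for (w1, w2), c in fg_bi.items():
        p_fg = c / (n_fg - 1)                                   # foreground bigram prob
        p_ind = (fg_uni[w1] / n_fg) * (fg_uni[w2] / n_fg)       # unigram independence model
        p_bg = (bg_bi[(w1, w2)] + 1) / (n_bg - 1 + len(fg_bi))  # background, add-one smoothed
        phraseness = pointwise_kl(p_fg, p_ind)
        informativeness = pointwise_kl(p_fg, p_bg)
        scores[(w1, w2)] = phraseness + informativeness          # unified score
    return scores
```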
{
"docid": "20b00a2cc472dfec851f4aea42578a9e",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "99a7cab192f636c940cfbe0b57d42ab3",
"text": "In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the singlecandidate model. Nevertheless the preference relationship between the antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noises during training and resolution. The experimental results on MUC-6 and MUC-7 data set show that our approach can outperform those based on the singlecandidate model.",
"title": ""
},
{
"docid": "96c30be2e528098e86b84b422d5a786a",
"text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.",
"title": ""
},
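The key point of the abstract is balancing nonzeros across PEs. The sketch below shows only that load-balancing step, as a greedy row-to-PE assignment over a CSR-style row list; the actual CBSR in-memory layout is not spelled out in the abstract, so this illustrates the idea rather than the format itself.

```python
import heapq

def balance_rows(rows, num_pes):
    """Greedily assign sparse rows to PEs so nonzero counts are balanced.

    rows: list of lists; rows[i] holds the nonzero column indices of row i.
    Returns one list of row indices per PE (longest-processing-time-first:
    the biggest rows go to the currently least-loaded PE).
    """
    order = sorted(range(len(rows)), key=lambda i: len(rows[i]), reverse=True)
    heap = [(0, pe) for pe in range(num_pes)]   # (current load, pe id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_pes)]
    for i in order:
        load, pe = heapq.heappop(heap)
        assignment[pe].append(i)
        heapq.heappush(heap, (load + len(rows[i]), pe))
    return assignment
```

A quick check such as `[sum(len(rows[i]) for i in pe) for pe in balance_rows(rows, 4)]` shows the per-PE nonzero counts ending up nearly equal, which is the property the format exploits during spMxV.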
{
"docid": "54d9985cd849605eb1c4c1369fc734cb",
"text": "Arjan Graybill Clinical Profile of the Juvenile Delinquent 1999 Dr. J. Klanderman Seminar in School Psychology This study attempted to explore the relationship that a juvenile delinquent has with three major influences: school, peers, and family. It was hypothesized that juvenile delinquents possess a poor relationship with these influences. Subjects were administered a survey which assesses the relationship with school, peers and family. 19 inmates in a juvenile detention center were administered the survey. There were 15 subjects in the control group who were administered the survey as well. Results from independent tscores reveal a significant difference in the relationship with school, peers, and family for the two groups. Juvenile delinquents were found to have a poor relationship with these major influences.",
"title": ""
},
{
"docid": "fb66a74a7cb4aa27556b428e378353a8",
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Abstract—High-resolution radar sensors are able to resolve multiple measurements per object and therefore provide valuable information for vehicle environment perception. For instance, multiple measurements allow to infer the size of an object or to more precisely measure the object’s motion. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple measurements for an object are necessary and measurement-toobject associations become more complex. This paper presents a new variational radar model for vehicles and demonstrates how this model can be incorporated in a Random-Finite-Setbased multi-object tracker. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multiobject tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data and it is demonstrated that data-driven measurement model outperforms a manually designed model.",
"title": ""
}
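A hedged illustration of the "learned from data with variational Gaussian mixtures" step: fit a Bayesian GMM to detection offsets around a tracked vehicle and evaluate new detections under it. The synthetic offsets, priors, and component count below are all assumptions; the paper's model and its integration into the Random-Finite-Set tracker are considerably richer.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical training data: radar detections as 2-D offsets (metres) from
# the tracked vehicle's reference point.
rng = np.random.default_rng(0)
offsets = np.vstack([
    rng.normal([ 2.0,  0.9], 0.2, size=(300, 2)),   # front-right corner region
    rng.normal([-2.0, -0.9], 0.2, size=(300, 2)),   # rear-left corner region
    rng.normal([ 0.0,  0.0], 0.6, size=(200, 2)),   # diffuse body reflections
])

# Variational Bayes fit: components unsupported by the data get their
# mixture weights driven towards zero automatically.
vgm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=1e-2,
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(offsets)

# Log-likelihood of a new detection under the learned measurement model.
print(vgm.score_samples(np.array([[1.9, 1.0]])))
```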
] |
scidocsrr
|
3ee9b35fae07c6267bd512e7df10f572
|
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
|
[
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
}
] |
[
{
"docid": "c0d4538f34499d19f14c3adba8527280",
"text": "OBJECTIVE\nTo consider the use of the diagnostic category 'complex posttraumatic stress disorder' (c-PTSD) as detailed in the forthcoming ICD-11 classification system as a less stigmatising, more clinically useful term, instead of the current DSM-5 defined condition of 'borderline personality disorder' (BPD).\n\n\nCONCLUSIONS\nTrauma, in its broadest definition, plays a key role in the development of both c-PTSD and BPD. Given this current lack of differentiation between these conditions, and the high stigma faced by people with BPD, it seems reasonable to consider using the diagnostic term 'complex posttraumatic stress disorder' to decrease stigma and provide a trauma-informed approach for BPD patients.",
"title": ""
},
{
"docid": "f0b32c584029cd407fd350ddd9d00e70",
"text": "Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challenging problem which can be addressed with distributed dynamic load balancing systems. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.",
"title": ""
},
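For readers unfamiliar with the mechanism, the toy below shows the core work-stealing policy: pop locally in LIFO order, steal from a random victim in FIFO order. It is a shared-memory, thread-style sketch with naive termination; the system in the paper targets distributed memory at thousands of processors and handles termination detection properly.

```python
import collections
import random
import threading

class WorkStealingWorker:
    """Toy shared-memory work-stealing worker (illustrative only)."""

    def __init__(self, wid, workers):
        self.wid = wid
        self.workers = workers          # list shared by all workers
        self.deque = collections.deque()
        self.lock = threading.Lock()

    def push(self, task):
        with self.lock:
            self.deque.append(task)

    def pop_local(self):
        with self.lock:
            return self.deque.pop() if self.deque else None       # LIFO locally

    def steal(self):
        others = [w for w in self.workers if w is not self]
        if not others:
            return None
        victim = random.choice(others)
        with victim.lock:
            return victim.deque.popleft() if victim.deque else None  # FIFO steal

    def run(self):
        while True:
            task = self.pop_local() or self.steal()
            if task is None:
                break    # naive termination; real systems need distributed
                         # termination detection before a worker may stop
            task()
```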
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
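The mains-notch application can be illustrated with a few lines of standard DSP code. Note this uses SciPy's floating-point IIR notch rather than the integer-multiplier, linear-phase recursive structures the paper is actually about; the sampling rate, mains frequency, and Q below are assumed values.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 500.0          # sampling rate in Hz (assumed)
mains = 50.0        # mains frequency; 60.0 in some regions
quality = 30.0      # notch quality factor (illustrative choice)

# Synthetic "ECG": a slow oscillation plus injected mains interference.
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * mains * t)

b, a = iirnotch(mains, quality, fs=fs)   # second-order IIR notch
clean = filtfilt(b, a, ecg)              # zero-phase filtering

spectrum = lambda x: np.abs(np.fft.rfft(x))
# Energy in the 50 Hz bin before vs. after filtering.
print(spectrum(ecg)[int(mains * 10)], spectrum(clean)[int(mains * 10)])
```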
{
"docid": "0c7e7491fbf8506d7a3d11e526b509d3",
"text": "While keystream reuse in stream ciphers and one-time pads has been a well known problem for several decades, the risk to real systems has been underappreciated. Previous techniques have relied on being able to accurately guess words and phrases that appear in one of the plaintext messages, making it far easier to claim that \"an attacker would never be able to do that.\" In this paper, we show how an adversary can automatically recover messages encrypted under the same keystream if only the type of each message is known (e.g. an HTML page in English). Our method, which is related to HMMs, recovers the most probable plaintext of this type by using a statistical language model and a dynamic programming algorithm. It produces up to 99% accuracy on realistic data and can process ciphertexts at 200ms per byte on a $2,000 PC. To further demonstrate the practical effectiveness of the method, we show that our tool can recover documents encrypted by Microsoft Word 2002 [22].",
"title": ""
},
{
"docid": "3ae81a471cce55f5da01aba9653d1bff",
"text": "In Attribute-based Encryption (ABE) scheme, attributes play a very important role. Attributes have been exploited to generate a public key for encrypting data and have been used as an access policy to control users’ access. The access policy can be categorized as either key-policy or ciphertext-policy. The key-policy is the access structure on the user’s private key, and the ciphertext-policy is the access structure on the ciphertext. And the access structure can also be categorized as either monotonic or non-monotonic one. Using ABE schemes can have the advantages: (1) to reduce the communication overhead of the Internet, and (2) to provide a fine-grained access control. In this paper, we survey a basic attribute-based encryption scheme, two various access policy attributebased encryption schemes, and two various access structures, which are analyzed for cloud environments. Finally, we list the comparisons of these schemes by some criteria for cloud environments.",
"title": ""
},
{
"docid": "74c6600ea1027349081c08c687119ee3",
"text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.",
"title": ""
},
{
"docid": "807564cfc2e90dee21a3efd8dc754ba3",
"text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.",
"title": ""
},
{
"docid": "df88873bdef2ad38a7b2157d6c4c2324",
"text": "Software Testing is a challenging activity for many software engineering projects and it is one of the five main technical activity areas of the software engineering lifecycle that still poses substantial challenges. Testing software requires enough resources and budget to complete it successfully. But most of the organizations face the challenges to provide enough resources to test their software in distributed environment, with different loading level. This leads to severe problem when the software deployed into different client environment and varying user load. Cloud computing is a one of the emerging technology which opens new door for software testing. This paper investigates the software testing in cloud platform which includes cloud testing models, recent research work, commercial tools and research issues.",
"title": ""
},
{
"docid": "d261c284cc4c959b525ceae9f7cfb00c",
"text": "Innate lymphoid cells (ILCs) were first described as playing important roles in the development of lymphoid tissues and more recently in the initiation of inflammation at barrier surfaces in response to infection or tissue damage. It has now become apparent that ILCs play more complex roles throughout the duration of immune responses, participating in the transition from innate to adaptive immunity and contributing to chronic inflammation. The proximity of ILCs to epithelial surfaces and their constitutive strategic positioning in other tissues throughout the body ensures that, in spite of their rarity, ILCs are able to regulate immune homeostasis effectively. Dysregulation of ILC function might result in chronic pathologies such as allergies, autoimmunity, and inflammation. A new role for ILCs in the maintenance of metabolic homeostasis has started to emerge, underlining their importance in fundamental physiological processes beyond infection and immunity.",
"title": ""
},
{
"docid": "bb2c1b4b08a25df54fbd46eaca138337",
"text": "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.",
"title": ""
},
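One simple way to "take the proximity distribution of potential neighbours across many mapped vectors into account" is to rank each candidate target by the rank the query achieves in that target's similarity list computed over all mapped queries, which demotes hub targets that sit near everything. The sketch below follows that idea; the cosine similarity and the exact ranking rule are assumptions and may not match the paper's correction precisely.

```python
import numpy as np

def globally_corrected_nn(mapped_queries, targets, query_index):
    """Label one mapped query while damping hub effects.

    Instead of taking the raw nearest target, each target is scored by the
    rank the current query obtains in that target's similarity list over
    *all* mapped queries. Hubs, being close to many queries, give poor
    (large) ranks and stop dominating the neighbour lists.
    """
    q = mapped_queries / np.linalg.norm(mapped_queries, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    sims = q @ t.T                                  # (num_queries, num_targets)
    # rank of each query within each target's column (0 = most similar)
    ranks = (-sims).argsort(axis=0).argsort(axis=0)
    return int(np.argmin(ranks[query_index]))       # target where the query ranks best

# toy usage with random vectors
rng = np.random.default_rng(1)
Q, T = rng.normal(size=(50, 8)), rng.normal(size=(20, 8))
print(globally_corrected_nn(Q, T, query_index=3))
```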
{
"docid": "df679dcd213842a786c1ad9587c66f77",
"text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in",
"title": ""
},
{
"docid": "e1651c1f329b8caa53e5322be5bf700b",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
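A minimal sketch of a genetic search that trades off the two criteria named above, courseware difficulty matched to learner ability and concept continuity along the path. The chromosome encoding, fitness weights, and mutation-only evolution are illustrative simplifications, not the system described in the paper.

```python
import random

def fitness(path, difficulty, ability, continuity, w1=1.0, w2=1.0):
    """Higher is better. `path` is an ordering of courseware ids.

    difficulty[c]    : difficulty level of courseware c
    ability          : learner ability estimated from the pre-test
    continuity[a][b] : strength of the concept link from a to b
    The linear combination and the weights are illustrative assumptions.
    """
    mismatch = sum(abs(difficulty[c] - ability) for c in path) / len(path)
    smoothness = sum(continuity[a][b] for a, b in zip(path, path[1:])) / max(len(path) - 1, 1)
    return w2 * smoothness - w1 * mismatch

def evolve(candidates, difficulty, ability, continuity, generations=200):
    # Tiny GA: permutation chromosomes, swap mutation only, elitist survival.
    pop = [random.sample(candidates, len(candidates)) for _ in range(30)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, difficulty, ability, continuity), reverse=True)
        survivors = pop[:10]
        children = []
        for p in survivors:
            child = p[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]     # swap two courseware items
            children.append(child)
        fresh = [random.sample(candidates, len(candidates)) for _ in range(10)]
        pop = survivors + children + fresh
    return max(pop, key=lambda p: fitness(p, difficulty, ability, continuity))
```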
{
"docid": "fdd01ae46b9c57eada917a6e74796141",
"text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.",
"title": ""
},
{
"docid": "1c576cf604526b448f0264f2c39f705a",
"text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.",
"title": ""
},
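The stateless many-time scheme in the abstract is too involved to sketch here, but its basic hash-based ingredient can be shown: a Lamport one-time signature, where the private key is pairs of random preimages and signing reveals one preimage per message bit. This is a textbook building block, not the paper's construction, and each key pair must sign only one message.

```python
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()

def keygen(bits=256):
    # Two random preimages per message-digest bit; public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(msg, sk):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][b] for i, b in enumerate(bits)]   # reveal one preimage per bit

def verify(msg, sig, pk):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))   # True; reusing sk for a second message is unsafe
```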
{
"docid": "19548ee85a25f7536783e480e6d80b3b",
"text": "A family of two-phase interleaved LLC (iLLC) resonant converter with hybrid rectifier is proposed for wide output voltage range applications. The primary sides of the two LLC converters are in parallel, and the connection of the secondary windings in the two LLC converters can be regulated by the hybrid rectifier according to the output voltage. Variable frequency control is employed to regulate the output voltage and the secondary windings are in series when the output voltage is high. Fixed-frequency phase-shift control is adopted to regulate the configuration of the secondary windings as well as the output voltage when the output voltage is low. The output voltage range is extended by adaptively changing the configuration of the hybrid rectifier, which results in reduced switching frequency range, circulating current, and conduction losses of the LLC resonant tank. Zero voltage switching and zero current switching are achieved for all the active switches and diodes, respectively, within the entire operation range. The operation principles are analyzed and a 3.5 kW prototype with 400 V input voltage and 150–500 V output voltage is built and tested to evaluate the feasibility of the proposed method.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "d300119f7e25b4252d7212ca42b32fb3",
"text": "Various computational procedures or constraint-based methods for data repairing have been proposed over the last decades to identify errors and, when possible, correct them. However, these approaches have several limitations including the scalability and quality of the values to be used in replacement of the errors. In this paper, we propose a new data repairing approach that is based on maximizing the likelihood of replacement data given the data distribution, which can be modeled using statistical machine learning techniques. This is a novel approach combining machine learning and likelihood methods for cleaning dirty databases by value modification. We develop a quality measure of the repairing updates based on the likelihood benefit and the amount of changes applied to the database. We propose SCARE (SCalable Automatic REpairing), a systematic scalable framework that follows our approach. SCARE relies on a robust mechanism for horizontal data partitioning and a combination of machine learning techniques to predict the set of possible updates. Due to data partitioning, several updates can be predicted for a single record based on local views on each data partition. Therefore, we propose a mechanism to combine the local predictions and obtain accurate final predictions. Finally, we experimentally demonstrate the effectiveness, efficiency, and scalability of our approach on real-world datasets in comparison to recent data cleaning approaches.",
"title": ""
},
{
"docid": "7519e3a8326e2ef2ebd28c22e80c4e34",
"text": "This paper presents a synthetic framework identifying the central drivers of start-up commercialization strategy and the implications of these drivers for industrial dynamics. We link strategy to the commercialization environment – the microeconomic and strategic conditions facing a firm that is translating an \" idea \" into a value proposition for customers. The framework addresses why technology entrepreneurs in some environments undermine established firms, while others cooperate with incumbents and reinforce existing market power. Our analysis suggests that competitive interaction between start-up innovators and established firms depends on the presence or absence of a \" market for ideas. \" By focusing on the operating requirements, efficiency, and institutions associated with markets for ideas, this framework holds several implications for the management of high-technology entrepreneurial firms. (Stern). We would like to thank the firms who participate in the MIT Commercialization Strategies survey for their time and effort. The past two decades have witnessed a dramatic increase in investment in technology entrepreneurship – the founding of small, start-up firms developing inventions and technology with significant potential commercial application. Because of their youth and small size, start-up innovators usually have little experience in the markets for which their innovations are most appropriate, and they have at most two or three technologies at the stage of potential market introduction. For these firms, a key management challenge is how to translate promising",
"title": ""
},
{
"docid": "a7b8986dbfde4a7ccc3a4ad6e07319a7",
"text": "This article tests expectations generated by the veto players theory with respect to the over time composition of budgets in a multidimensional policy space. The theory predicts that countries with many veto players (i.e., coalition governments, bicameral political systems, presidents with veto) will have difficulty altering the budget structures. In addition, countries that tend to make significant shifts in government composition will have commensurate modifications of the budget. Data collected from 19 advanced industrialized countries from 1973 to 1995 confirm these expectations, even when one introduces socioeconomic controls for budget adjustments like unemployment variations, size of retired population and types of government (minimum winning coalitions, minority or oversized governments). The methodological innovation of the article is the use of empirical indicators to operationalize the multidimensional policy spaces underlying the structure of budgets. The results are consistent with other analyses of macroeconomic outcomes like inflation, budget deficits and taxation that are changed at a slower pace by multiparty governments. The purpose of this article is to test empirically the expectations of the veto players theory in a multidimensional setting. The theory defines ‘veto players’ as individuals or institutions whose agreement is required for a change of the status quo. The basic prediction of the theory is that when the number of veto players and their ideological distances increase, policy stability also increases (only small departures from the status quo are possible) (Tsebelis 1995, 1999, 2000, 2002). The theory was designed for the study of unidimensional and multidimensional policy spaces. While no policy domain is strictly unidimensional, existing empirical tests have only focused on analyzing political economy issues in a single dimension. These studies have confirmed the veto players theory’s expectations (see Bawn (1999) on budgets; Hallerberg & Basinger (1998) on taxes; Tsebelis (1999) on labor legislation; Treisman (2000) on inflation; Franzese (1999) on budget deficits). This article is the first attempt to test whether the predictions of the veto players theory hold in multidimensional policy spaces. We will study a phenomenon that cannot be considered unidimensional: the ‘structure’ of budgets – that is, their percentage composition, and the change in this composition over © European Consortium for Political Research 2004 Published by Blackwell Publishing Ltd., 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] |
scidocsrr
|
961d888381ecee3b18856d6be0835344
|
Modeling and predicting behavioral dynamics on the web
|
[
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
}
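A typical ingredient of such time-aware models is letting baseline predictors drift with time. The sketch below adds a drifting user-bias term of the form b_u + alpha_u * dev(t) to a plain matrix-factorization prediction; the deviation function and the beta value are common illustrative choices, and the full model in the paper includes further temporal components (item time bins, per-day effects, and so on).

```python
import numpy as np

def predict(mu, b_i, b_u, alpha_u, t, t_u_mean, p_u, q_i, beta=0.4):
    """Rating prediction with a time-drifting user bias.

    dev(t) = sign(t - t_u_mean) * |t - t_u_mean| ** beta (t in days) lets the
    user's baseline drift away from its value at the user's mean rating date.
    """
    dev = np.sign(t - t_u_mean) * abs(t - t_u_mean) ** beta
    return mu + b_i + b_u + alpha_u * dev + q_i @ p_u

# toy call: global mean 3.6, small latent vectors, rating made 120 days after
# the user's mean rating date
print(predict(3.6, 0.1, -0.2, 0.002, 120, 0, np.ones(4) * 0.1, np.ones(4) * 0.2))
```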
] |
[
{
"docid": "8e520ad94c7555b9bb1546786b532adb",
"text": "We propose Machines Talking To Machines (M2M), a framework combining automation and crowdsourcing to rapidly bootstrap endto-end dialogue agents for goal-oriented dialogues in arbitrary domains. M2M scales to new tasks with just a task schema and an API client from the dialogue system developer, but it is also customizable to cater to task-specific interactions. Compared to the Wizard-of-Oz approach for data collection, M2M achieves greater diversity and coverage of salient dialogue flows while maintaining the naturalness of individual utterances. In the first phase, a simulated user bot and a domain-agnostic system bot converse to exhaustively generate dialogue “outlines”, i.e. sequences of template utterances and their semantic parses. In the second phase, crowd workers provide contextual rewrites of the dialogues to make the utterances more natural while preserving their meaning. The entire process can finish within a few hours. We propose a new corpus of 3,000 dialogues spanning 2 domains collected with M2M, and present comparisons with popular dialogue datasets on the quality and diversity of the surface forms and dialogue flows.",
"title": ""
},
{
"docid": "236dcb6dd7e04c0600c2f0b90f94c5dd",
"text": "Main call for Cloud computing is that users only utilize what they required and only pay for what they really use. Mobile Cloud Computing refers to an infrastructure where data processing and storage can happen away from mobile device. Portio research estimates that mobile subscribers worldwide will reach 6.9 billion by the end of 2013 and 8 billion by the end of 2016. Ericsson also forecasts that mobile subscriptions will reach 9 billion by 2017. Due to increasing use of mobile devices the requirement of cloud computing in mobile devices arise, which gave birth to Mobile Cloud Computing. Mobile devices do not need to have large storage capacity and powerful CPU speed. Due to storing data on cloud there is an issue of data security. Because of the risk associated with data storage many IT professionals are not showing their interest towards Mobile Cloud Computing. To ensure the correctness of users' data in the cloud, we propose an effective mechanism with salient feature of data integrity and confidentiality. This paper proposed a mechanism which uses the concept of RSA algorithm, Hash function along with several cryptography tools to provide better security to the data stored on the mobile cloud.",
"title": ""
},
{
"docid": "0dd0f44e59c1ee1e04d1e675dfd0fd9c",
"text": "An important first step to successful global marketing is to understand the similarities and dissimilarities of values between cultures. This task is particularly daunting for companies trying to do business with China because of the scarcity of research-based information. This study uses updated values of Hofstede’s (1980) cultural model to compare the effectiveness of Pollay’s advertising appeals between the U.S. and China. Nine of the twenty hypotheses predicting effective appeals based on cultural dimensions were supported. An additional hypothesis was significant, but in the opposite direction as predicted. These findings suggest that it would be unwise to use Hofstede’s cultural dimensions as a sole predictor for effective advertising appeals. The Hofstede dimensions may lack the currency and fine grain necessary to effectively predict the success of the various advertising appeals. Further, the effectiveness of advertising appeals may be moderated by other factors, such as age, societal trends, political-legal environment and product usage.",
"title": ""
},
{
"docid": "93fd668e65372cfa7b1b885b2fbca092",
"text": "Electroosmotic flow (EOF) is used to pump solutions through microfluidic devices and capillary electrophoresis columns. We describe here an EOF pump based on membrane EOF rectification, an electrokinetic phenomenon we recently described. EOF rectification requires membranes with asymmetrically shaped pores, and conical pores in a polymeric membrane were used here. We show here that solution flow through the membrane can be achieved by applying a symmetrical sinusoidal voltage waveform across the membrane. This is possible because the alternating current (AC) carried by ions through the pore is rectified, and we previously showed that rectified currents yield EOF rectification. We have investigated the effect of both the magnitude and frequency of the voltage waveform on flow rate through the membrane, and we have measured the maximum operating pressure. Finally, we show that operating in AC mode offers potential advantages relative to conventional DC-mode EOF pumps.",
"title": ""
},
{
"docid": "ada8c64a2e5c7be58a2200e8d1f64063",
"text": "Nitrogen-containing bioactive alkaloids of plant origin play a significant role in human health and medicine. Several semisynthetic antimitotic alkaloids are successful in anticancer drug development. Gloriosa superba biosynthesizes substantial quantities of colchicine, a bioactive molecule for gout treatment. Colchicine also has antimitotic activity, preventing growth of cancer cells by interacting with microtubules, which could lead to the design of better cancer therapeutics. Further, several colchicine semisynthetics are less toxic than colchicine. Research is being conducted on effective, less toxic colchicine semisynthetic formulations with potential drug delivery strategies directly targeting multiple solid cancers. This article reviews the dynamic state of anticancer drug development from colchicine semisynthetics and natural colchicine production and briefly discusses colchicine biosynthesis.",
"title": ""
},
{
"docid": "0923e899e5d7091a6da240db21eefad2",
"text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.",
"title": ""
},
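The "predict, and skip measuring when the prediction error is low" logic can be illustrated with a least-squares fit of stage position against tilt angle. The linear model, the error threshold, and its units below are assumptions for illustration; the actual program's prediction rules are more detailed.

```python
import numpy as np

def predict_position(tilts, positions, next_tilt, max_err=50.0):
    """Fit x(t) = a*t + b to recent measurements and predict the next position.

    Returns (prediction, skip_tracking): tracking is skipped when the residual
    standard error of the fit is below `max_err` (threshold and units are
    illustrative choices, not the program's actual criterion).
    """
    t = np.asarray(tilts, dtype=float)
    x = np.asarray(positions, dtype=float)
    A = np.vstack([t, np.ones_like(t)]).T
    coef, residuals, _, _ = np.linalg.lstsq(A, x, rcond=None)
    pred = coef[0] * next_tilt + coef[1]
    dof = max(len(t) - 2, 1)
    sigma = np.sqrt((residuals[0] if residuals.size else 0.0) / dof)
    return pred, bool(sigma < max_err)

print(predict_position([-10, -8, -6, -4], [102.0, 98.5, 95.1, 91.4], -2))
```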
{
"docid": "cf2e23cddb72b02d1cca83b4c3bf17a8",
"text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiencyand flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were unD ow nl oa de d fr om in fo rm s. or g by [ 12 8. 32 .7 5. 11 8] o n 28 A pr il 20 14 , a t 1 0: 21 . Fo r pe rs on al u se o nl y, a ll ri gh ts r es er ve d. PAUL S. ADLER, BARBARA GOLDOFTAS AND DAVID I. LEVINE Flexibility Versus Efficiency? 44 ORGANIZATION SCIENCE/Vol. 10, No. 1, January–February 1999 der Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. 
The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). 
Strategy researchers such as Ghemawat and Costa (1993) argue that firms must chose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather continuous processes. D ow nl oa de d fr om in fo rm s. or g by [ 12 8. 32 .7 5. 11 8] o n 28 A pr il 20 14 , a t 1 0: 21 . Fo r pe rs on al u se o nl y, a ll ri gh ts r es er ve d. PAUL S. ADLER, BARBARA GOLDOFTAS AND DAVID I. LEVINE Flexibility Versus Efficiency? ORGANIZATION SCIENCE/Vol. 10, No. 1, January–February 1999 45 On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible affect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr",
"title": ""
},
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
},
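The passage above describes a chord recognition system built on an N-gram model and chord-based features for music emotion classification. The sketch below is a toy illustration of that general idea, not the paper's system: the chord sequences, add-alpha smoothing, and function names are assumptions made for the example.

```python
# Minimal sketch: a bigram model over chord symbols, used both to score
# candidate chord sequences and to build a simple chord-distribution
# feature vector for downstream classification.
import math
from collections import Counter, defaultdict

def train_bigram(chord_sequences, alpha=1.0):
    """Estimate add-alpha smoothed P(next_chord | prev_chord)."""
    vocab = sorted({c for seq in chord_sequences for c in seq})
    counts = defaultdict(Counter)
    for seq in chord_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    def prob(prev, nxt):
        total = sum(counts[prev].values())
        return (counts[prev][nxt] + alpha) / (total + alpha * len(vocab))
    return vocab, prob

def sequence_score(seq, prob):
    """Log-probability of a chord sequence under the bigram model."""
    return sum(math.log(prob(p, n)) for p, n in zip(seq, seq[1:]))

def chord_histogram(seq, vocab):
    """Normalized chord-occurrence histogram, usable as a feature vector."""
    c = Counter(seq)
    total = max(len(seq), 1)
    return [c[ch] / total for ch in vocab]

# Toy usage with made-up chord sequences.
songs = [["C", "G", "Am", "F", "C"], ["Am", "F", "C", "G", "Am"]]
vocab, prob = train_bigram(songs)
print(sequence_score(["C", "G", "Am"], prob))
print(chord_histogram(songs[0], vocab))
```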
{
"docid": "f60c0c53b83fb1f1bdd68b0f3d1051c9",
"text": "Television (TV), the predominant advertising medium, is being transformed by the micro-targeting capabilities of set-top boxes (STBs). By procuring impressions at the STB level (often denoted programmatic television), advertisers can now lower per-exposure costs and/or reach viewers most responsive to advertising creatives. Accordingly, this paper uses a proprietary, household-level, single-source data set to develop an instantaneous show and advertisement viewing model to forecast consumers’ exposure to advertising and the downstream consequences for impressions and sales. Viewing data suggest person-specific factors dwarf brandor show-specific factors in explaining advertising avoidance, thereby suggesting that device-level advertising targeting can be more effective than existing show-level targeting. Consistent with this observation, the model indicates that microtargeting lowers advertising costs and raises incremental profits considerably relative to show-level targeting. Further, these advantages are amplified when advertisers are allowed to buy real-time as opposed to up-front.",
"title": ""
},
{
"docid": "9e8a0adda8d52a2df1515be369dc4a49",
"text": "The past decade has seen the rapid development of technology in online learning. Its development has led many researchers to investigate the use of Web 2.0, where learning is present not only in the four-walls of the classroom. This paper, therefore, reports on students‟ experiences using a Web 2.0 tool, namely Edmodo. In particular, the study aims at identifying their perceptions of using the platform in language learning and their views on the possibility of using it to supplement face-to-face discussions in English language classes. The study involved 24 samples who undergone focus group interview as the method of collecting data. In general, results of the study revealed mixed reviews, where some students agreed with the use of Edmodo while others expressed negative opinions towards its use. In addition, two broad themes emerged from analysis of the data. These findings have significant implications in that the teacher, in the first place, needs to be equipped with the knowledge of using the platform to benefit the students. Nevertheless, this research extends our knowledge for the understanding of how to integrate Web 2.0 technologies in language classes.",
"title": ""
},
{
"docid": "d2951716e0c76499cd7089d917ddc1a6",
"text": "Research in Fine-Grained Visual Classification has focused on tackling the variations in pose, lighting, and viewpoint using sophisticated localization and segmentation techniques, and the usage of robust texture features to improve performance. In this work, we look at the fundamental optimization of neural network training for fine-grained classification tasks with minimal inter-class variance, and attempt to learn features with increased generalization to prevent overfitting. We introduce Training-with-Confusion, an optimization procedure for fine-grained classification tasks that regularizes training by introducing confusion in activations. Our method can be generalized to any fine-tuning task; it is robust to the presence of small training sets and label noise; and adds no overhead to the prediction time. We find that Training-with-Confusion improves the state-of-the-art on all major fine-grained classification datasets.",
"title": ""
},
{
"docid": "a078933ffbb2f0488b3b425b78fb7dd0",
"text": "Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method. 1 Background and Motivation Semantic role labeling has proven useful in many natural language processing tasks, such as question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007), textual entailment (Sammons et al., 2009), machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Gao and Vogel, 2011) and dialogue systems (Basili et al., 2009; van der Plas et al., 2009). Multiple models have been designed to automatically predict semantic roles, and a considerable amount of data has been annotated to train these models, if only for a few more popular languages. As the annotation is costly, one would like to leverage existing resources to minimize the human effort required to construct a model for a new language. A number of approaches to the construction of semantic role labeling models for new languages have been proposed. On one end of the scale is unsupervised SRL, such as Grenager and Manning (2006), which requires some expert knowledge, but no labeled data. It clusters together arguments that should bear the same semantic role, but does not assign a particular role to each cluster. On the other end is annotating a new dataset from scratch. There are also intermediate options, which often make use of similarities between languages. This way, if an accurate model exists for one language, it should help simplify the construction of a model for another, related language. The approaches in this third group often use parallel data to bridge the gap between languages. Cross-lingual annotation projection systems (Padó and Lapata, 2009), for example, propagate information directly via word alignment links. However, they are very sensitive to the quality of parallel data, as well as the accuracy of a sourcelanguage model on it. An alternative approach, known as cross-lingual model transfer, or cross-lingual model adaptation, consists of modifying a source-language model to make it directly applicable to a new language. This usually involves constructing a shared feature representation across the two languages. McDonald et al. (2011) successfully apply this idea to the transfer of dependency parsers, using part-ofspeech tags as the shared representation of words. A later extension of Täckström et al. (2012) enriches this representation with cross-lingual word clusters, considerably improving the performance. In the case of SRL, a shared representation that is purely syntactic is likely to be insufficient, since structures with different semantics may be realized by the same syntactic construct, for example “in August” vs “in Britain”. However with the help of recently introduced cross-lingual word represen-",
"title": ""
},
{
"docid": "f0042c2198c3ddc99dc30df15da6dd6e",
"text": "A fully integrated 24-GHz CMOS ultra-wideband (UWB) radar transmitter for short-range automotive application is presented. For high-range resolution and improved signal-to-noise ratio, a pulse compression technique using binary phase code is adopted. Design issues of UWB radar transmitter are investigated based on fundamental pulse theory. A pulse former, which operates as a switch to generate a pulse modulated carrier signal and a bi-phase modulator for pulse compression, is proposed. The proposed transmitter achieves 4-GHz output signal bandwidth, which means a minimum range resolution of 7.5 cm, and the total dc power dissipation is 63 mW.",
"title": ""
},
{
"docid": "41eddeb86d561882b85895277cbd38e9",
"text": "With the rapid growth of data traffic in data centers, data rates over 50Gb/s/signal (e.g., OIF-CEI-56G-VSR) will eventually be required in wireline chip-to-module or chip-to-chip communications [1-3]. To achieve better power efficiency than that of existing 25Gb/s/signal designs, a high-speed yet energy-efficient front-end is needed in both the transmitter and receiver. A receiver front-end with baud-rate architecture [1] has been successfully operated at 56Gb/s, but additional components such as eye-monitoring comparators, phase detectors, and clock recovery circuitry as well as a power-efficient transmitter are needed to build a complete transceiver.",
"title": ""
},
{
"docid": "bfdcad076ec599716de7d2dc43323059",
"text": "The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods. In this investigation, we evaluated the C4.5 decision tree, logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) neural network methods, both as single classifiers and combined in a hierarchical classification, for the mapping of nine major summer crops (both woody and herbaceous) from ASTER satellite images captured in two different dates. Each method was built with different combinations of spectral and textural features obtained after the segmentation of the remote images in an object-based framework. As single classifiers, MLP and SVM obtained maximum overall accuracy of 88%, slightly higher than LR (86%) and notably higher than C4.5 (79%). The SVM+SVM classifier (best method) improved these results to 89%. In most cases, the hierarchical classifiers considerably increased the accuracy of the most poorly classified class (minimum sensitivity). The SVM+SVM method offered a significant improvement in classification accuracy for all of the studied crops compared to OPEN ACCESS Remote Sens. 2014, 6 5020 the conventional decision tree classifier, ranging between 4% for safflower and 29% for corn, which suggests the application of object-based image analysis and advanced machine learning methods in complex crop classification tasks.",
"title": ""
},
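The crop-mapping passage above combines object-based image analysis with machine learning classifiers, with a hierarchical "SVM+SVM" combination performing best. The sketch below shows one minimal way such a two-stage classifier could be wired up; it is not the authors' code. The synthetic features stand in for spectral/textural attributes of segmented image objects, and the woody/herbaceous grouping is an illustrative assumption.

```python
# Minimal sketch of a hierarchical "SVM+SVM" classifier on object-level features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 300, 8
X = rng.normal(size=(n, d))                 # object features (synthetic)
crop = rng.integers(0, 4, size=n)           # 4 toy crop classes
group = (crop >= 2).astype(int)             # 0 = herbaceous, 1 = woody (assumed grouping)

# Stage 1: coarse group classifier.
stage1 = SVC(kernel="rbf", gamma="scale").fit(X, group)

# Stage 2: one SVM per group for the final crop label.
stage2 = {g: SVC(kernel="rbf", gamma="scale").fit(X[group == g], crop[group == g])
          for g in (0, 1)}

def predict(x):
    g = int(stage1.predict(x.reshape(1, -1))[0])
    return int(stage2[g].predict(x.reshape(1, -1))[0])

print(predict(X[0]), crop[0])
```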
{
"docid": "dc310f1a5fb33bd3cbe9de95b2a0159c",
"text": "The MYO armband from Thalmic Labs is a complete and wireless motion and muscle sensing platform. This paper evaluates the armband’s sensors and its potential for NIME applications. This is followed by a presentation of the prototype instrument MuMYO. We conclude that, despite some shortcomings, the armband has potential of becoming a new “standard” controller in the NIME community.",
"title": ""
},
{
"docid": "12f4242c16c1d73fded4cb32ccc938ea",
"text": "Cloud Computing is a form of distributed computing wherein resources and application platforms are distributed over the Internet through on demand and pay on utilization basis. Data Storage is main feature that cloud data centres are provided to the companies/organizations to preserve huge data. But still few organizations are not ready to use cloud technology due to lack of security. This paper describes the different techniques along with few security challenges, advantages and also disadvantages. It also provides the analysis of data security issues and privacy protection affairs related to cloud computing by preventing data access from unauthorized users, managing sensitive data, providing accuracy and consistency of data stored.",
"title": ""
},
{
"docid": "051d402ce90d7d326cc567e228c8411f",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ccb5a426e9636186d2819f34b5f0d5e8",
"text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).",
"title": ""
},
{
"docid": "35d9cfbb5f0b2623ce83973ae3235c74",
"text": "Text entry has been a bottleneck of nontraditional computing devices. One of the promising methods is the virtual keyboard for touch screens. Correcting previous estimates on virtual keyboard efficiency in the literature, we estimated the potential performance of the existing QWERTY, FITALY, and OPTI designs of virtual keyboards to be in the neighborhood of 28, 36, and 38 words per minute (wpm), respectively. This article presents 2 quantitative design techniques to search for virtual keyboard layouts. The first technique simulated the dynamics of a keyboard with digraph springs between keys, which produced a Hooke keyboard with 41.6 wpm movement efficiency. The second technique used a Metropolis random walk algorithm guided by a “Fitts-digraph energy” objective function that quantifies the movement efficiency of a virtual keyboard. This method produced various Metropolis keyboards with different HUMAN-COMPUTER INTERACTION, 2002, Volume 17, pp. 89–XXX Copyright © 2002, Lawrence Erlbaum Associates, Inc. Shumin Zhai is a human–computer interaction researcher with an interest in inventing and analyzing interaction methods and devices based on human performance insights and experimentation; he is a Research Staff Member in the User Sciences and Experience Research Department of the IBM Almaden Research Center. Michael Hunter is a graduate student of Computer Science at Brigham Young University; he is interested in designing graphical and haptic user interfaces. Barton A. Smith is an experimental scientist with an interest in machines, people, and society; he is manager of the Human Interface Research Group at the IBM Almaden Research Center. shapes and structures with approximately 42.5 wpm movement efficiency, which was 50% higher than QWERTY and 10% higher than OPTI. With a small reduction (41.16 wpm) of movement efficiency, we introduced 2 more design objectives that produced the ATOMIK layout. One was alphabetical tuning that placed the keys with a tendency from A to Z so a novice user could more easily locate the keys. The other was word connectivity enhancement so the most frequent words were easier to find, remember, and type.",
"title": ""
}
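The keyboard-optimization passage above describes a Metropolis random walk guided by a "Fitts-digraph energy" (expected movement time summed over digraphs weighted by frequency). The sketch below illustrates that optimization loop under stated assumptions; the Fitts-law constants, digraph frequencies, and key grid are toy values, not the article's data.

```python
# Illustrative sketch of a Metropolis random walk over keyboard layouts
# minimizing a Fitts-digraph energy: sum over digraphs of frequency * Fitts time.
import math, random

KEYS = list("abcdefghijklmnopqrstuvwxyz")
# Toy digraph frequencies; a real run would use corpus statistics.
DIGRAPH_FREQ = {("t", "h"): 0.04, ("h", "e"): 0.03, ("a", "n"): 0.02,
                ("i", "n"): 0.02, ("e", "r"): 0.02}
A, B, WIDTH = 0.083, 0.127, 1.0          # assumed Fitts-law constants and key width

def fitts_time(p, q):
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    return A + B * math.log2(dist / WIDTH + 1.0)

def energy(layout, positions):
    pos = {k: positions[i] for i, k in enumerate(layout)}
    return sum(f * fitts_time(pos[a], pos[b]) for (a, b), f in DIGRAPH_FREQ.items())

def metropolis(positions, iters=20000, temp=0.05):
    layout = KEYS[:]
    random.shuffle(layout)
    e = energy(layout, positions)
    for _ in range(iters):
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]        # propose a key swap
        e_new = energy(layout, positions)
        if e_new < e or random.random() < math.exp((e - e_new) / temp):
            e = e_new                                      # accept
        else:
            layout[i], layout[j] = layout[j], layout[i]    # reject: undo the swap
    return layout, e

positions = [(i % 6, i // 6) for i in range(len(KEYS))]    # 6-wide toy grid
best, cost = metropolis(positions)
print(cost)
```

A real design run would also fold in the alphabetical-tuning and word-connectivity objectives mentioned in the passage as extra terms in the energy.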
] |
scidocsrr
|
9feb0d3750b4d5da6182e3264f8cedab
|
Depth Silhouettes Context: A New Robust Feature for Human Tracking and Activity Recognition based on Advanced Hidden Markov Model
|
[
{
"docid": "fa440af1d9ec65caf3cd37981919b56e",
"text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "37aca8c5ec945d4a91984683538b0bc6",
"text": "Little is known about the neurobiological mechanisms underlying prosocial decisions and how they are modulated by social factors such as perceived group membership. The present study investigates the neural processes preceding the willingness to engage in costly helping toward ingroup and outgroup members. Soccer fans witnessed a fan of their favorite team (ingroup member) or of a rival team (outgroup member) experience pain. They were subsequently able to choose to help the other by enduring physical pain themselves to reduce the other's pain. Helping the ingroup member was best predicted by anterior insula activation when seeing him suffer and by associated self-reports of empathic concern. In contrast, not helping the outgroup member was best predicted by nucleus accumbens activation and the degree of negative evaluation of the other. We conclude that empathy-related insula activation can motivate costly helping, whereas an antagonistic signal in nucleus accumbens reduces the propensity to help.",
"title": ""
},
{
"docid": "6dc4cefb15977ba4b4f33f7ce792196a",
"text": "Fuel cells convert chemical energy directly into electrical energy with high efficiency and low emission of pollutants. However, before fuel-cell technology can gain a significant share of the electrical power market, important issues have to be addressed. These issues include optimal choice of fuel, and the development of alternative materials in the fuel-cell stack. Present fuel-cell prototypes often use materials selected more than 25 years ago. Commercialization aspects, including cost and durability, have revealed inadequacies in some of these materials. Here we summarize recent progress in the search and development of innovative alternative materials.",
"title": ""
},
{
"docid": "432e7ae2e76d76dbb42d92cd9103e3d2",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
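The paraphrase passage above pivots through a second language: a phrase e1 is paraphrased by e2 with probability p(e2|e1) = Σ_f p(e2|f) · p(f|e1), where f ranges over foreign phrases aligned to e1. The sketch below shows that computation on tiny made-up phrase tables; real tables would come from word-aligned bilingual parallel text, and the phrase pairs here are invented for illustration.

```python
# Minimal sketch of pivot-based paraphrase scoring over toy phrase tables.
from collections import defaultdict

p_f_given_e = {("under control", "unter kontrolle"): 0.7,
               ("under control", "im griff"): 0.3}
p_e_given_f = {("unter kontrolle", "under control"): 0.6,
               ("unter kontrolle", "in check"): 0.4,
               ("im griff", "in check"): 0.5,
               ("im griff", "under control"): 0.5}

def paraphrase_probs(e1):
    scores = defaultdict(float)
    for (e, f), pf in p_f_given_e.items():
        if e != e1:
            continue
        for (f2, e2), pe in p_e_given_f.items():
            if f2 == f and e2 != e1:
                scores[e2] += pe * pf      # sum over pivot phrases f
    return dict(scores)

print(paraphrase_probs("under control"))   # {'in check': 0.43}
```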
{
"docid": "f6f045cad34d50eea8517ee9fbb3da57",
"text": "The increasing rate of high (secondary) school leavers choosing academic majors to study at the university without proper guidance has most times left students with unfavorable consequences including low grades, extra year(s), the need to switch programs and ultimately having to withdraw from the university. In a bid to proffer a solution to the issue, this research aims to build an expert system that recommends university or academic majors to high school students in developing countries where there is a dearth of human career counselors. This is to reduce the adverse effects caused as a result of wrong choices made by students. A mobile rule-based expert system supported with ontology was developed for easy accessibility by the students.",
"title": ""
},
{
"docid": "6f30153ddb49d6cec554dbec53f0ad0e",
"text": "Recommendations can greatly benefit from good representations of the user state at recommendation time. Recent approaches that leverage Recurrent Neural Networks (RNNs) for session-based recommendations have shown that Deep Learning models can provide useful user representations for recommendation. However, current RNN modeling approaches summarize the user state by only taking into account the sequence of items that the user has interacted with in the past, without taking into account other essential types of context information such as the associated types of user-item interactions, the time gaps between events and the time of day for each interaction. To address this, we propose a new class of Contextual Recurrent Neural Networks for Recommendation (CRNNs) that can take into account the contextual information both in the input and output layers and modifying the behavior of the RNN by combining the context embedding with the item embedding and more explicitly, in the model dynamics, by parametrizing the hidden unit transitions as a function of context information. We compare our CRNNs approach with RNNs and non-sequential baselines and show good improvements on the next event prediction task.",
"title": ""
},
{
"docid": "fb3002fff98d4645188910989638af69",
"text": "Stress is important in substance use disorders (SUDs). Mindfulness training (MT) has shown promise for stress-related maladies. No studies have compared MT to empirically validated treatments for SUDs. The goals of this study were to assess MT compared to cognitive behavioral therapy (CBT) in substance use and treatment acceptability, and specificity of MT compared to CBT in targeting stress reactivity. Thirty-six individuals with alcohol and/or cocaine use disorders were randomly assigned to receive group MT or CBT in an outpatient setting. Drug use was assessed weekly. After treatment, responses to personalized stress provocation were measured. Fourteen individuals completed treatment. There were no differences in treatment satisfaction or drug use between groups. The laboratory paradigm suggested reduced psychological and physiological indices of stress during provocation in MT compared to CBT. This pilot study provides evidence of the feasibility of MT in treating SUDs and suggests that MT may be efficacious in targeting stress.",
"title": ""
},
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
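The JA-BE-JA passage above rests on a simple local-search move: two nodes swap partition colors when the swap increases the number of same-colored neighbors, with a simulated-annealing temperature allowing occasionally worse swaps early on. The sketch below is a simplified, centralized illustration of that move, not the paper's fully distributed protocol; the utility exponent, cooling schedule, and toy graph are assumptions.

```python
# Simplified sketch of the JA-BE-JA swap rule: color swaps preserve partition
# sizes, and the annealing temperature T (cooling toward 1) lets the search
# escape local optima by accepting some non-improving swaps.
import random

def same_color_neighbors(graph, colors, v, c):
    return sum(1 for u in graph[v] if colors[u] == c)

def jabeja_step(graph, colors, temperature, alpha=2.0):
    v, u = random.sample(list(graph), 2)
    cv, cu = colors[v], colors[u]
    if cv == cu:
        return
    old = same_color_neighbors(graph, colors, v, cv) ** alpha \
        + same_color_neighbors(graph, colors, u, cu) ** alpha
    new = same_color_neighbors(graph, colors, v, cu) ** alpha \
        + same_color_neighbors(graph, colors, u, cv) ** alpha
    if new * temperature > old:        # T > 1 early on makes worse swaps acceptable
        colors[v], colors[u] = colors[u], colors[v]

def partition(graph, k=2, rounds=20000, t0=2.0, delta=0.0001):
    nodes = list(graph)
    colors = {v: i % k for i, v in enumerate(nodes)}   # balanced initial coloring
    t = t0
    for _ in range(rounds):
        jabeja_step(graph, colors, t)
        t = max(1.0, t - delta)
    return colors

# Toy graph: two triangles joined by one edge.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
colors = partition(g)
cut = sum(1 for v in g for u in g[v] if colors[u] != colors[v]) // 2
print(colors, cut)
```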
{
"docid": "6c1317ef88110756467a10c4502851bb",
"text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.",
"title": ""
},
{
"docid": "bc388488c5695286fe7d7e56ac15fa94",
"text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.",
"title": ""
},
{
"docid": "3f657657a24c03038bd402498b7abddd",
"text": "We propose a system for real-time animation of eyes that can be interactively controlled in a WebGL enabled device using a small number of animation parameters, including gaze. These animation parameters can be obtained using traditional keyframed animation curves, measured from an actor's performance using off-the-shelf eye tracking methods, or estimated from the scene observed by the character, using behavioral models of human vision. We present a model of eye movement, that includes not only movement of the globes, but also of the eyelids and other soft tissues in the eye region. The model includes formation of expression wrinkles in soft tissues. To our knowledge this is the first system for real-time animation of soft tissue movement around the eyes based on gaze input.",
"title": ""
},
{
"docid": "b2124dfd12529c1b72899b9866b34d03",
"text": "In today's world, the amount of stored information has been enormously increasing day by day which is generally in the unstructured form and cannot be used for any processing to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization are available for the same which comes under the category of text mining. Text Mining can be defined as a technique which is used to extract interesting information or knowledge from the text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values.",
"title": ""
},
{
"docid": "ef81266ae8c2023ea35dca8384db3803",
"text": "Linked Open Data has been recognized as a useful source of background knowledge for building content-based recommender systems. Vast amount of RDF data, covering multiple domains, has been published in freely accessible datasets. In this paper, we present an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs used for building content-based recommender system. We generate sequences by leveraging local information from graph sub-structures and learn latent numerical representations of entities in RDF graphs. Our evaluation on two datasets in the domain of movies and books shows that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be effectively used in content-based recommender systems.",
"title": ""
},
{
"docid": "80504ceedad8eb61c55d6b3aea91b97e",
"text": "In this paper, an airport departure scheduling tool for aircraft is presented based on constraint satisfaction techniques. Airports are getting more and more congested with the available runway configuration as one of the most constraining factors. A possibility to alleviate this congestion is to assist controllers in the planning and scheduling process of aircraft. The prototype presented here is aimed to offer such assistance in the establishment of an optimal departure schedule and the planning of initial climb phases for departing aircraft. This goal is accomplished by modelling the scheduling problem as a constraint satisfaction problem, using ILOG Solver and Scheduler as an implementation environment.",
"title": ""
},
{
"docid": "c53021193518ebdd7006609463bafbcc",
"text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.",
"title": ""
},
{
"docid": "be8864d6fb098c8a008bfeea02d4921a",
"text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.",
"title": ""
},
{
"docid": "4f5f128195592fe881269f54fd3424e7",
"text": "In this research, a new method is proposed for the optimization of warship spare parts stock with genetic algorithm. Warships should fulfill her duties in all circumstances. Considering the warships have more than a hundred thousand unique parts, it is a very hard problem to decide which spare parts should be stocked at warehouse aiming to use in case of failure. In this study, genetic algorithm that is a heuristic optimization method is used to solve this problem. The demand quantity, the criticality and the cost of parts is used for optimization. A genetic algorithm with very long chromosome is used, i.e. over 1000 genes in one chromosome. The outputs of the method is analyzed and compared with the Price Sensitive 0.5 FLSIP+ model, which is widely used over navies, and came to a conclusion that the proposed method is better.",
"title": ""
},
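The spare-parts passage above encodes each part as one gene (stock it or not) and drives the search with demand, criticality, and cost. The sketch below is an illustrative genetic algorithm of that shape, not the paper's model: the fitness function, budget penalty, selection scheme, and all numeric parameters are assumptions chosen for the example.

```python
# Illustrative GA sketch for spare-parts stocking: each gene says whether to
# stock a part; fitness rewards covering high-demand, high-criticality parts
# and penalizes exceeding a cost budget.
import random

random.seed(1)
N_PARTS, BUDGET = 1000, 5000.0
demand      = [random.random() for _ in range(N_PARTS)]
criticality = [random.random() for _ in range(N_PARTS)]
cost        = [random.uniform(1, 20) for _ in range(N_PARTS)]

def fitness(chrom):
    value = sum(d * c for d, c, g in zip(demand, criticality, chrom) if g)
    spent = sum(p for p, g in zip(cost, chrom) if g)
    penalty = max(0.0, spent - BUDGET)          # soft budget constraint
    return value - 0.05 * penalty

def crossover(a, b):
    cut = random.randrange(1, N_PARTS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.002):
    return [1 - g if random.random() < rate else g for g in chrom]

def run_ga(pop_size=40, generations=200):
    pop = [[random.randint(0, 1) for _ in range(N_PARTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
print(round(fitness(best), 2), sum(best), "parts stocked")
```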
{
"docid": "050dd71858325edd4c1a42fc1a25de95",
"text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.",
"title": ""
},
{
"docid": "ae5bf888ce9a61981be60b9db6fc2d9c",
"text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.",
"title": ""
},
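The abstract above does not spell out how the Paired Distance Protocol generates its decoys, so the sketch below illustrates only the general honeyword mechanism it builds on, not PDP itself: each account stores several sweetwords (one real password plus decoys), while a separate honeychecker holds only the index of the real one and raises an alarm when a decoy is submitted. The decoy strategy, parameter k, and all names here are assumptions.

```python
# Generic honeyword sketch (not the paper's PDP).
import hashlib, random, secrets

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

def tweak_digits(pw, k):
    """Simple decoy generator: perturb trailing digits (one common strategy)."""
    decoys = set()
    while len(decoys) < k - 1:
        cand = pw.rstrip("0123456789") + str(secrets.randbelow(10000))
        if cand != pw:
            decoys.add(cand)
    return list(decoys)

def register(pw, k=5):
    sweet = tweak_digits(pw, k) + [pw]
    random.shuffle(sweet)
    record = [h(s) for s in sweet]      # stored in the credential file
    true_index = sweet.index(pw)        # stored only at the honeychecker
    return record, true_index

def login(record, true_index, attempt):
    if h(attempt) not in record:
        return "wrong password"
    if record.index(h(attempt)) == true_index:
        return "ok"
    return "ALARM: honeyword submitted, credential file likely compromised"

record, idx = register("falcon1987")
print(login(record, idx, "falcon1987"))
print(login(record, idx, "falcon42"))   # may hit a decoy or just be wrong
```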
{
"docid": "e8b486ce556a0193148ffd743661bce9",
"text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.",
"title": ""
},
{
"docid": "0ef117ca4663f523d791464dad9a7ebf",
"text": "In this paper, a circularly polarized, omnidirectional side-fed bifilar helix antenna, which does not require a ground plane is presented. The antenna has a height of less than 0.1λ and the maximum boresight gain of 1.95dB, with 3dB beamwidth of 93°. The impedance bandwidth of the antenna for VSWR≤2 (with reference to resonant input resistance of 25Ω) is 2.7%. The simulated axial ratio(AR) at the resonant frequency 860MHz is 0.9 ≤AR≤ 1.0 in the whole hemisphere except small region around the nulls. The polarization bandwidth for AR≤3dB is 34.7%. The antenna is especially useful for high speed aerodynamic bodies made of composite materials (such as UAVs) where low profile antennas are essential to reduce air resistance and/or proper metallic ground is not available for monopole-type antenna.",
"title": ""
}
] |
scidocsrr
|
6c3fdeae4358c225363f50e856a465a2
|
Bayesian Affect Control Theory
|
[
{
"docid": "3ef6a2d1c125d5c7edf60e3ceed23317",
"text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10 and 10 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.",
"title": ""
},
{
"docid": "31a1a5ce4c9a8bc09cbecb396164ceb4",
"text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.",
"title": ""
}
] |
[
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "f5e3014f479556cde21321cf1ce8f9e3",
"text": "Physiological signals are widely used to perform medical assessment for monitoring an extensive range of pathologies, usually related to cardio-vascular diseases. Among these, both PhotoPlethysmoGraphy (PPG) and Electrocardiography (ECG) signals are those more employed. PPG signals are an emerging non-invasive measurement technique used to study blood volume pulsations through the detection and analysis of the back-scattered optical radiation coming from the skin. ECG is the process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin. In the present paper we propose a physiological ECG/PPG \"combo\" pipeline using an innovative bio-inspired nonlinear system based on a reaction-diffusion mathematical model, implemented by means of the Cellular Neural Network (CNN) methodology, to filter PPG signal by assigning a recognition score to the waveforms in the time series. The resulting \"clean\" PPG signal exempts from distortion and artifacts is used to validate for diagnostic purpose an EGC signal simultaneously detected for a same patient. The multisite combo PPG-ECG system proposed in this work overpasses the limitations of the state of the art in this field providing a reliable system for assessing the above-mentioned physiological parameters and their monitoring over time for robust medical assessment. The proposed system has been validated and the results confirmed the robustness of the proposed approach.",
"title": ""
},
{
"docid": "70ea3e32d4928e7fd174b417ec8b6d0e",
"text": "We show that invariance in a deep neural network is equivalent to information minimality of the representation it computes, and that stacking layers and injecting noise during training naturally bias the network towards learning invariant representations. Then, we show that overfitting is related to the quantity of information stored in the weights, and derive a sharp bound between this information and the minimality and Total Correlation of the layers. This allows us to conclude that implicit and explicit regularization of the loss function not only help limit overfitting, but also foster invariance and disentangling of the learned representation. We also shed light on the properties of deep networks in relation to the geometry of the loss function.",
"title": ""
},
{
"docid": "ae186ad5dce2bd3b32ffa993d33625a5",
"text": "We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel light field camera rigs designed for portable and efficient light field acquisition. We introduce a novel real-time light field reconstruction algorithm that uses a per-view geometry and a disk-based blending field. We also demonstrate how to use a light field prefiltering operation to project from a high-quality offline reconstruction model into our real-time model while suppressing artifacts. We introduce a practical approach for compressing light fields by modifying the VP9 video codec to provide high quality compression with real-time, random access decompression.\n We combine these components into a complete light field system offering convenient acquisition, compact file size, and high-quality rendering while generating stereo views at 90Hz on commodity VR hardware. Using our system, we built a freely available light field experience application called Welcome to Light Fields featuring a library of panoramic light field stills for consumer VR which has been downloaded over 15,000 times.",
"title": ""
},
{
"docid": "efc48edab4d039b94a87d473e0158033",
"text": "The classifier built from a data set with a highly skewed class distribution generally predicts the more frequently occurring classes much more often than the infrequently occurring classes. This is largely due to the fact that most classifiers are designed to maximize accuracy. In many instances, such as for medical diagnosis, this classification behavior is unacceptable because the minority class is the class of primary interest (i.e., it has a much higher misclassification cost than the majority class). In this paper we compare three methods for dealing with data that has a skewed class distribution and nonuniform misclassification costs. The first method incorporates the misclassification costs into the learning algorithm while the other two methods employ oversampling or undersampling to make the training data more balanced. In this paper we empirically compare the effectiveness of these methods in order to determine which produces the best overall classifier—and under what circumstances.",
"title": ""
},
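The class-imbalance passage above compares cost-sensitive learning with oversampling and undersampling. The sketch below runs those three strategies side by side on synthetic data; the 10:1 imbalance, the classifier choice, and the class-weight value are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the three strategies on synthetic data: (1) cost-sensitive learning
# via class weights, (2) random oversampling of the minority class,
# (3) random undersampling of the majority class.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_maj, n_min = 2000, 200
X = np.vstack([rng.normal(0, 1, (n_maj, 5)), rng.normal(1.0, 1, (n_min, 5))])
y = np.array([0] * n_maj + [1] * n_min)

def resample(X, y, minority=1, over=True):
    maj, mnr = np.where(y != minority)[0], np.where(y == minority)[0]
    if over:   # duplicate minority examples until roughly balanced
        extra = rng.choice(mnr, size=len(maj) - len(mnr), replace=True)
        idx = np.concatenate([maj, mnr, extra])
    else:      # drop majority examples until roughly balanced
        keep = rng.choice(maj, size=len(mnr), replace=False)
        idx = np.concatenate([keep, mnr])
    return X[idx], y[idx]

models = {
    "cost-sensitive": LogisticRegression(class_weight={0: 1, 1: 10}).fit(X, y),
    "oversampling":   LogisticRegression().fit(*resample(X, y, over=True)),
    "undersampling":  LogisticRegression().fit(*resample(X, y, over=False)),
}
for name, m in models.items():
    print(name, round(f1_score(y, m.predict(X)), 3))   # evaluated on the same data for brevity
```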
{
"docid": "8abedc8a3f3ad84c940e38735b759745",
"text": "Degeneration is a senescence process that occurs in all living organisms. Although tremendous efforts have been exerted to alleviate this degenerative tendency, minimal progress has been achieved to date. The nematode, Caenorhabditis elegans (C. elegans), which shares over 60% genetic similarities with humans, is a model animal that is commonly used in studies on genetics, neuroscience, and molecular gerontology. However, studying the effect of exercise on C. elegans is difficult because of its small size unlike larger animals. To this end, we fabricated a flow chamber, called \"worm treadmill,\" to drive worms to exercise through swimming. In the device, the worms were oriented by electrotaxis on demand. After the exercise treatment, the lifespan, lipofuscin, reproductive capacity, and locomotive power of the worms were analyzed. The wild-type and the Alzheimer's disease model strains were utilized in the assessment. Although degeneration remained irreversible, both exercise-treated strains indicated an improved tendency compared with their control counterparts. Furthermore, low oxidative stress and lipofuscin accumulation were also observed among the exercise-treated worms. We conjecture that escalated antioxidant enzymes imparted the worms with an extra capacity to scavenge excessive oxidative stress from their bodies, which alleviated the adverse effects of degeneration. Our study highlights the significance of exercise in degeneration from the perspective of the simple life form, C. elegans.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "fd809ccbf0042b84147e88e4009ab894",
"text": "Professional sports is a roughly $500 billion dollar industry that is increasingly data-driven. In this paper we show how machine learning can be applied to generate a model that could lead to better on-field decisions by managers of professional baseball teams. Specifically we show how to use regularized linear regression to learn pitcher-specific predictive models that can be used to help decide when a starting pitcher should be replaced. A key step in the process is our method of converting categorical variables (e.g., the venue in which a game is played) into continuous variables suitable for the regression. Another key step is dealing with situations in which there is an insufficient amount of data to compute measures such as the effectiveness of a pitcher against specific batters. \n For each season we trained on the first 80% of the games, and tested on the rest. The results suggest that using our model could have led to better decisions than those made by major league managers. Applying our model would have led to a different decision 48% of the time. For those games in which a manager left a pitcher in that our model would have removed, the pitcher ended up performing poorly 60% of the time.",
"title": ""
},
{
"docid": "e2be1b93be261deac59b5afde2f57ae1",
"text": "The electronic and transport properties of carbon nanotube has been investigated in presence of ammonia gas molecule, using Density Functional Theory (DFT) based ab-initio approach. The model of CNT sensor has been build using zigzag (7, 0) CNT with a NH3 molecule adsorbed on its surface. The presence of NH3 molecule results in increase of CNT band gap. From the analysis of I-V curve, it is observed that the adsorption of NH3 leads to different voltage and current curve in comparison to its pristine state confirms the presence of NH3.",
"title": ""
},
{
"docid": "5e7976392b26e7c2172d2e5c02d85c57",
"text": "A multiprocessor virtual machine benefits its guest operating system in supporting scalable job throughput and request latency—useful properties in server consolidation where servers require several of the system processors for steady state or to handle load bursts. Typical operating systems, optimized for multiprocessor systems in their use of spin-locks for critical sections, can defeat flexible virtual machine scheduling due to lock-holder preemption and misbalanced load. The virtual machine must assist the guest operating system to avoid lock-holder preemption and to schedule jobs with knowledge of asymmetric processor allocation. We want to support a virtual machine environment with flexible scheduling policies, while maximizing guest performance. This paper presents solutions to avoid lock-holder preemption for both fully virtualized and paravirtualized environments. Experiments show that we can nearly eliminate the effects of lock-holder preemption. Furthermore, the paper presents a scheduler feedback mechanism that despite the presence of asymmetric processor allocation achieves optimal and fair load balancing in the guest operating system.",
"title": ""
},
{
"docid": "f0af945042c44b20d6bd9f81a0b21b6b",
"text": "We investigate a technique to adapt unsupervised word embeddings to specific applications, when only small and noisy labeled datasets are available. Current methods use pre-trained embeddings to initialize model parameters, and then use the labeled data to tailor them for the intended task. However, this approach is prone to overfitting when the training is performed with scarce and noisy data. To overcome this issue, we use the supervised data to find an embedding subspace that fits the task complexity. All the word representations are adapted through a projection into this task-specific subspace, even if they do not occur on the labeled dataset. This approach was recently used in the SemEval 2015 Twitter sentiment analysis challenge, attaining state-of-the-art results. Here we show results improving those of the challenge, as well as additional experiments in a Twitter Part-Of-Speech tagging task.",
"title": ""
},
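The embedding-adaptation passage above finds a task-specific subspace from a small labeled set and projects every word vector into it, including words never seen in the labeled data. The sketch below shows one simple way to realize that idea, using the top principal components of the embeddings of labeled-set words as the subspace; the paper's own subspace-learning procedure may differ, and the vocabulary, dimensions, and data here are synthetic.

```python
# Sketch: estimate a low-dimensional subspace from labeled-set word embeddings,
# then project the whole vocabulary into it.
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"word{i}" for i in range(5000)]
emb = {w: rng.normal(size=300) for w in vocab}          # "pretrained" embeddings
labeled_words = vocab[:400]                             # words occurring in the labeled set

def subspace_from(words, k=20):
    M = np.stack([emb[w] for w in words])
    M = M - M.mean(axis=0)
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[:k]                                       # (k, 300) projection basis

def adapt_all(basis):
    """Project every vocabulary word, including words unseen in the labeled data."""
    return {w: basis @ v for w, v in emb.items()}

basis = subspace_from(labeled_words, k=20)
adapted = adapt_all(basis)
print(adapted["word4999"].shape)                        # (20,) even though unseen during fitting
```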
{
"docid": "babdf14e560236f5fcc8a827357514e5",
"text": "Email: [email protected] Abstract: The NP-hard (complete) team orienteering problem is a particular vehicle routing problem with the aim of maximizing the profits gained from visiting control points without exceeding a travel cost limit. The team orienteering problem has a number of applications in several fields such as athlete recruiting, technician routing and tourist trip. Therefore, solving optimally the team orienteering problem would play a major role in logistic management. In this study, a novel randomized population constructive heuristic is introduced. This heuristic constructs a diversified initial population for population-based metaheuristics. The heuristics proved its efficiency. Indeed, experiments conducted on the well-known benchmarks of the team orienteering problem show that the initial population constructed by the presented heuristic wraps the best-known solution for 131 benchmarks and good solutions for a great number of benchmarks.",
"title": ""
},
{
"docid": "5a601e08824185bafeb94ac432b6e92e",
"text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task. Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.",
"title": ""
},
{
"docid": "ede6bef7b623e95cf99b1d7c85332abb",
"text": "The design of a temperature compensated IC on-chip oscillator and a low voltage detection circuitry sharing the bandgap reference is described. The circuit includes a new bandgap isolation strategy to reduce oscillator noise coupled through the current sources. The IC oscillator provides a selectable clock (11.6 MHz or 21.4 MHz) with digital trimming to minimize process variations. After fine-tuning the oscillator to the target frequency, the temperature compensated voltage and current references guarantees less than /spl plusmn/2.5% frequency variation from -40 to 125/spl deg/C, when operating from 3 V to 5 V of power supply. The low voltage detection circuit monitors the supply voltage applied to the system and generates the appropriate warning or even initiates a system shutdown before the in-circuit SoC presents malfunction. The module was implemented in a 0.5 /spl mu/m CMOS technology, occupies an area of 360 /spl times/ 530 /spl mu/m/sub 2/ and requires no external reference or components.",
"title": ""
},
{
"docid": "1eb6558dab37b34d3c7c261654535104",
"text": "We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation.",
"title": ""
},
{
"docid": "e4b6dbd8238160457f14aacb8f9717ff",
"text": "Abs t r ac t . The PKZIP program is one of the more widely used archive/ compression programs on personM, computers. It also has many compatible variants on other computers~ and is used by most BBS's and ftp sites to compress their archives. PKZIP provides a stream cipher which allows users to scramble files with variable length keys (passwords). In this paper we describe a known pla.intext attack on this cipher, which can find the internal representation of the key within a few hours on a personal computer using a few hundred bytes of known plaintext. In many cases, the actual user keys can also be found from the internal representation. We conclude that the PKZIP cipher is weak, and should not be used to protect valuable data.",
"title": ""
},
{
"docid": "4096499f4e34f6c1f0c3bb0bb63fb748",
"text": "A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.",
"title": ""
},
{
"docid": "1d9b50bf7fa39c11cca4e864bbec5cf3",
"text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
},
{
"docid": "50c639dfa7063d77cda26666eabeb969",
"text": "This paper addresses the problem of detecting people in two dimensional range scans. Previous approaches have mostly used pre-defined features for the detection and tracking of people. We propose an approach that utilizes a supervised learning technique to create a classifier that facilitates the detection of people. In particular, our approach applies AdaBoost to train a strong classifier from simple features of groups of neighboring beams corresponding to legs in range data. Experimental results carried out with laser range data illustrate the robustness of our approach even in cluttered office environments",
"title": ""
}
] |
scidocsrr
|
e7ca898a2a3ba288b2ae071ee3330a46
|
Autonomous Driving in Traffic: Boss and the Urban Challenge
|
[
{
"docid": "e77c136b2d3e4afb36b27eeda946a37d",
"text": "We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically-feasible actions with two higher-level planners for generating long range plans in both on-road and unstructured areas of the environment. In this Part II of a two-part paper, we describe the unstructured planning component of this system used for navigating through parking lots and recovering from anomalous on-road scenarios. We provide examples and results from ldquoBossrdquo, an autonomous SUV that has driven itself over 3000 kilometers and competed in, and won, the Urban Challenge.",
"title": ""
}
] |
[
{
"docid": "d552b6beeea587bc014a4c31cabee121",
"text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.",
"title": ""
},
{
"docid": "4a9b4668296561b3522c3c57c64220c1",
"text": "Hyperspectral imagery, which contains hundreds of spectral bands, has the potential to better describe the biological and chemical attributes on the plants than multispectral imagery and has been evaluated in this paper for the purpose of crop yield estimation. The spectrum of each pixel in a hyperspectral image is considered as a linear combinations of the spectra of the vegetation and the bare soil. Recently developed linear unmixing approaches are evaluated in this paper, which automatically extracts the spectra of the vegetation and bare soil from the images. The vegetation abundances are then computed based on the extracted spectra. In order to reduce the influences of this uncertainty and obtain a robust estimation results, the vegetation abundances extracted on two different dates on the same fields are then combined. The experiments are carried on the multidate hyperspectral images taken from two grain sorghum fields. The results show that the correlation coefficients between the vegetation abundances obtained by unsupervised linear unmixing approaches are as good as the results obtained by supervised methods, where the spectra of the vegetation and bare soil are measured in the laboratory. In addition, the combination of vegetation abundances extracted on different dates can improve the correlations (from 0.6 to 0.7).",
"title": ""
},
{
"docid": "305679866d219b0856ed48230f30c549",
"text": "The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at leas one of privacy, accuracy, and consistency (among multiple released tables). We propose a solution that provides strong guarantees for all three desiderata simultaneously.\n Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself.\n The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube.",
"title": ""
},
{
"docid": "4cb41f9de259f18cd8fe52d2f04756a6",
"text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. JEL Classification: D12, C21",
"title": ""
},
{
"docid": "749800c4dae57eb13b5c3df9e0c302a0",
"text": "In a contemporary clinical laboratory it is very common to have to assess the agreement between two quantitative methods of measurement. The correct statistical approach to assess this degree of agreement is not obvious. Correlation and regression studies are frequently proposed. However, correlation studies the relationship between one variable and another, not the differences, and it is not recommended as a method for assessing the comparability between methods.
In 1983 Altman and Bland (B&A) proposed an alternative analysis, based on the quantification of the agreement between two quantitative measurements by studying the mean difference and constructing limits of agreement.
The B&A plot analysis is a simple way to evaluate a bias between the mean differences, and to estimate an agreement interval, within which 95% of the differences of the second method, compared to the first one, fall. Data can be analyzed both as unit differences plot and as percentage differences plot.
The B&A plot method only defines the intervals of agreements, it does not say whether those limits are acceptable or not. Acceptable limits must be defined a priori, based on clinical necessity, biological considerations or other goals.
The aim of this article is to provide guidance on the use and interpretation of Bland Altman analysis in method comparison studies.",
"title": ""
},
{
"docid": "09985252933e82cf1615dabcf1e6d9a2",
"text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.",
"title": ""
},
{
"docid": "56002273444d2078d5db47671255555a",
"text": "The credit card has become the most popular mode of payment for both online as well as regular purchase, in cases of fraud associated with it are also rising. Credit card frauds are increasing day by day regardless of various techniques developed for its detection. Fraudsters are so experts that they generate new ways of committing fraudulent transactions each day which demands constant innovation for its detection techniques. Most of the techniques based on Artificial Intelligence, Fuzzy Logic, Neural Network, Logistic Regression, Naïve Bayesian, Machine Learning, Sequence Alignment, Decision tree, Bayesian network, meta learning, Genetic programming etc., these are evolved in detecting various credit card fraudulent transactions. This paper presents a survey of various techniques used in various credit card fraud detection mechanisms.",
"title": ""
},
{
"docid": "b53f2f922661bfb14bf2181236fad566",
"text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). 
Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning problem (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). (Often, the number of such labelled target samples is not sufficient to train a robust model using target data alone.) These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections, for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model DLID: Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representations of the source and target inputs. However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation.
Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006), have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pretraining. The key difference is that in the DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. Let the set of data samples for the source domain S be denoted by DS, and that of the target domain T be denoted by DT. Starting with all the source data samples DS, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from DT, and decrease the proportion of samples drawn from DS. In particular, let p ∈ [1, ..., P] be an index over the P datasets we generate. Then we have Dp = DS for p = 1, Dp = DT for p = P. For p ∈ [2, ..., P − 1], datasets Dp and Dp+1 are created in a way so that the proportion of samples from DT in Dp is less than in Dp+1. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T.",
"title": ""
},
{
"docid": "785c716d4f127a5a5fee02bc29aeb352",
"text": "In this paper we propose a novel, improved, phase generated carrier (PGC) demodulation algorithm based on the PGC-differential-cross-multiplying approach (PGC-DCM). The influence of phase modulation amplitude variation and light intensity disturbance (LID) on traditional PGC demodulation algorithms is analyzed theoretically and experimentally. An experimental system for remote no-contact microvibration measurement is set up to confirm the stability of the improved PGC algorithm with LID. In the experiment, when the LID with a frequency of 50 Hz and the depth of 0.3 is applied, the signal-to-noise and distortion ratio (SINAD) of the improved PGC algorithm is 19 dB, higher than the SINAD of the PGC-DCM algorithm, which is 8.7 dB.",
"title": ""
},
{
"docid": "c2c81d5f7c1be2f6a877811cd61f055d",
"text": "Since the cognitive revolution of the sixties, representation has served as the central concept of cognitive theory and representational theories of mind have provided the establishment view in cognitive science (Fodor, 1980; Gardner, 1985; Vera & Simon, 1993). Central to this line of thinking is the belief that knowledge exists solely in the head, and instruction involves finding the most efficient means for facilitating the “acquisition” of this knowledge (Gagne, Briggs, & Wager, 1993). Over the last two decades, however, numerous educational psychologists and instructional designers have begun abandoning cognitive theories that emphasize individual thinkers and their isolated minds. Instead, these researchers have adopted theories that emphasize the social and contextualized nature of cognition and meaning (Brown, Collins, & Duguid, 1989; Greeno, 1989, 1997; Hollan, Hutchins, & Kirsch, 2000; Lave & Wenger, 1991; Resnick, 1987; Salomon, 1993). Central to these reconceptualizations is an emphasis on contextualized activity and ongoing participation as the core units of analysis (Barab & Kirshner, 2001; Barab & Plucker, 2002; Brown & Duguid, 1991; Cook & Yanow, 1993;",
"title": ""
},
{
"docid": "5c30ecda39e41e2b32659e12c9585ba6",
"text": "We extend the arc-hybrid transition system for dependency parsing with a SWAP transition that enables reordering of the words and construction of non-projective trees. Although this extension potentially breaks the arc-decomposability of the transition system, we show that the existing dynamic oracle can be modified and combined with a static oracle for the SWAP transition. Experiments on five languages with different degrees of non-projectivity show that the new system gives competitive accuracy and is significantly better than a system trained with a purely static oracle.",
"title": ""
},
{
"docid": "77ac3a28ffa420a1e4f1366d36b4c188",
"text": " Call-Exner bodies are present in ovarian follicles of a range of species including human and rabbit, and in a range of human ovarian tumors. We have also found structures resembling Call-Exner bodies in bovine preantral and small antral follicles. Hematoxylin and eosin staining of single sections of bovine ovaries has shown that 30% of preantral follicles with more than one layer of granulosa cells and 45% of small (less than 650 μm) antral follicles have at least one Call-Exner body composed of a spherical eosinophilic region surrounded by a rosette of granulosa cells. Alcian blue stains the spherical eosinophilic region of the Call-Exner bodies. Electron microscopy has demonstrated that some Call-Exner bodies contain large aggregates of convoluted basal lamina, whereas others also contain regions of unassembled basal-lamina-like material. Individual chains of the basal lamina components type IV collagen (α1 to α5) and laminin (α1, β2 and δ1) have been immunolocalized to Call-Exner bodies in sections of fresh-frozen ovaries. Bovine Call-Exner bodies are presumably analogous to Call-Exner bodies in other species but are predominantly found in preantral and small antral follicles, rather than large antral follicles. With follicular development, the basal laminae of Call-Exner bodies change in their apparent ratio of type IV collagen to laminin, similar to changes observed in the follicular basal lamina, suggesting that these structures have a common cellular origin.",
"title": ""
},
{
"docid": "bf5874dc1fc1c968d7c41eb573d8d04a",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "5b1f814b7d8f1495733f0dc391449296",
"text": "Abstruct-A class of digital h e a r phase fiiite impulse response (FIR) filters for decimation (sampling rate decrease) and interpolation (sampling rate increase) are presented. They require no multipliers and use limited storage making them an economical alternative to conventional implementations for certain applications. A digital fiiter in this class consists of cascaded ideal integrator stages operating at a high sampling rate and an equal number of comb stages operating at a low sampling rate. Together, a single integrator-comb pair produces a uniform FIR. The number of cascaded integrator-comb pairs is chosen to meet design requirements for aliasing or imaging error. Design procedures and examples are given for both decimation and interpolation filters with the emphasis on frequency response and register width.",
"title": ""
},
{
"docid": "e5eb79b313dad91de1144cd0098cde15",
"text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.",
"title": ""
},
{
"docid": "183e715ca8e5c329ba58387d31e2f0f7",
"text": "We develop a system for 3D object retrieval based on sketched feature lines as input. For objective evaluation, we collect a large number of query sketches from human users that are related to an existing data base of objects. The sketches turn out to be generally quite abstract with large local and global deviations from the original shape. Based on this observation, we decide to use a bag-of-features approach over computer generated line drawings of the objects. We develop a targeted feature transform based on Gabor filters for this system. We can show objectively that this transform is better suited than other approaches from the literature developed for similar tasks. Moreover, we demonstrate how to optimize the parameters of our, as well as other approaches, based on the gathered sketches. In the resulting comparison, our approach is significantly better than any other system described so far.",
"title": ""
},
{
"docid": "fe11fc1282a7efc34a9efe0e81fb21d6",
"text": "Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads (including the general purpose ARM core, an on-chip co-processor, and the NEON core) on the AM335x SoC. Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.",
"title": ""
},
{
"docid": "c94e5133c083193227b26a9fb35a1fbd",
"text": "Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called \"Virtual KITTI\", automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"title": ""
},
{
"docid": "4f2fa6ee3a5e7a4b9a7472993b992439",
"text": "PURPOSE\nThe purpose of this research was to develop and evaluate a severity rating score for fecal incontinence, the Fecal Incontinence Severity Index.\n\n\nMETHODS\nThe Fecal Incontinence Severity Index is based on a type x frequency matrix. The matrix includes four types of leakage commonly found in the fecal incontinent population: gas, mucus, and liquid and solid stool and five frequencies: one to three times per month, once per week, twice per week, once per day, and twice per day. The Fecal Incontinence Severity Index was developed using both colon and rectal surgeons and patient input for the specification of the weighting scores.\n\n\nRESULTS\nSurgeons and patients had very similar weightings for each of the type x frequency combinations; significant differences occurred for only 3 of the 20 different weights. The Fecal Incontinence Severity Index score of a group of patients with fecal incontinence (N = 118) demonstrated significant correlations with three of the four scales found in a fecal incontinence quality-of-life scale.\n\n\nCONCLUSIONS\nEvaluation of the Fecal Incontinence Severity Index indicates that the index is a tool that can be used to assess severity of fecal incontinence. Overall, patient and surgeon ratings of severity are similar, with minor differences associated with the accidental loss of solid stool.",
"title": ""
},
{
"docid": "ffd7afcf6e3b836733b80ed681e2a2b9",
"text": "The emergence of cloud management systems, and the adoption of elastic cloud services enable dynamic adjustment of cloud hosted resources and provisioning. In order to effectively provision for dynamic workloads presented on cloud platforms, an accurate forecast of the load on the cloud resources is required. In this paper, we investigate various forecasting methods presented in recent research, identify and adapt evaluation metrics used in literature and compare forecasting methods on prediction performance. We investigate the performance gain of ensemble models when combining three of the best performing models into one model. We find that our 30th order Auto-regression model and Feed-Forward Neural Network method perform the best when evaluated on Google's Cluster dataset and using the provision specific metrics identified. We also show an improvement in forecasting accuracy when evaluating two ensemble models.",
"title": ""
}
] |
scidocsrr
|
fcf67fb98e8b93d1221462e904c3dede
|
Detecting bad smells in source code using change history information
|
[
{
"docid": "55b405991dc250cd56be709d53166dca",
"text": "In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as item set concise representations, redundancy reduction, and post processing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient post processing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the post processing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the post processing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process. KeywordsClustering, classification, and association rules, interactive data exploration and discovery, knowledge management applications.",
"title": ""
}
] |
[
{
"docid": "9f21792dbe89fa95d85e7210cf1de9c6",
"text": "Convolutional Neural Networks have provided state-of-the-art results in several computer vision problems. However, due to a large number of parameters in CNNs, they require a large number of training samples which is a limiting factor for small sample size problems. To address this limitation, we propose SSF-CNN which focuses on learning the \"structure\" and \"strength\" of filters. The structure of the filter is initialized using a dictionary based filter learning algorithm and the strength of the filter is learned using the small sample training data. The architecture provides the flexibility of training with both small and large training databases, and yields good accuracies even with small size training data. The effectiveness of the algorithm is first demonstrated on MNIST, CIFAR10, and NORB databases, with varying number of training samples. The results show that SSF-CNN significantly reduces the number of parameters required for training while providing high accuracies on the test databases. On small sample size problems such as newborn face recognition and Omniglot, it yields state-of-the-art results. Specifically, on the IIITD Newborn Face Database, the results demonstrate improvement in rank-1 identification accuracy by at least 10%.",
"title": ""
},
{
"docid": "82fd11e9e26914d798e7682cf9f393d4",
"text": "We present a new paradigm for real-time object-oriented SLAM with a monocular camera. Contrary to previous approaches, that rely on object-level models, we construct category-level models from CAD collections which are now widely available. To alleviate the need for huge amounts of labeled data, we develop a rendering pipeline that enables synthesis of large datasets from a limited amount of manually labeled data. Using data thus synthesized, we learn category-level models for object deformations in 3D, as well as discriminative object features in 2D. These category models are instance-independent and aid in the design of object landmark observations that can be incorporated into a generic monocular SLAM framework. Where typical object-SLAM approaches usually solve only for object and camera poses, we also estimate object shape on-the-fty, allowing for a wide range of objects from the category to be present in the scene. Moreover, since our 2D object features are learned discriminatively, the proposed object-SLAM system succeeds in several scenarios where sparse feature-based monocular SLAM fails due to insufficient features or parallax. Also, the proposed category-models help in object instance retrieval, useful for Augmented Reality (AR) applications. We evaluate the proposed framework on multiple challenging real-world scenes and show - to the best of our knowledge - first results of an instance-independent monocular object-SLAM system and the benefits it enjoys over feature-based SLAM methods.",
"title": ""
},
{
"docid": "8a2b17f2426118c45f04c76d6b203b62",
"text": "Flood damage assessment is a key component in the development of city flood risk management strategies. A flood damage assessment model is being developed by combining flood hazard information (depth, extent, velocity, duration, etc.) with geographic information (land use/cover, buildings, infrastructure, etc.), social-economic data and population demographics to estimate urban flood impacts. In this paper, Dhaka city is adopted to demonstrate the approach for damage modeling. Analysis show that coarse resolution modeling in such a densely developed urban area can lead to either overor under-estimation of the flood damage, depending on the thresholds to distinguish building and non-building cells in modeling. Increased accuracy can be achieved by fine resolution modeling but this would lead to large data sets that require long computational times. The paper demonstrates some key issues for damage assessment using advanced technology for a City like Dhaka.",
"title": ""
},
{
"docid": "6171a708ea6470b837439ad23af90dff",
"text": "Cardiovascular diseases represent a worldwide relevant socioeconomical problem. Cardiovascular disease prevention relies also on lifestyle changes, including dietary habits. The cardioprotective effects of several foods and dietary supplements in both animal models and in humans have been explored. It was found that beneficial effects are mainly dependent on antioxidant and anti-inflammatory properties, also involving modulation of mitochondrial function. Resveratrol is one of the most studied phytochemical compounds and it is provided with several benefits in cardiovascular diseases as well as in other pathological conditions (such as cancer). Other relevant compounds are Brassica oleracea, curcumin, and berberine, and they all exert beneficial effects in several diseases. In the attempt to provide a comprehensive reference tool for both researchers and clinicians, we summarized in the present paper the existing literature on both preclinical and clinical cardioprotective effects of each mentioned phytochemical. We structured the discussion of each compound by analyzing, first, its cellular molecular targets of action, subsequently focusing on results from applications in both ex vivo and in vivo models, finally discussing the relevance of the compound in the context of human diseases.",
"title": ""
},
{
"docid": "b9d12a2c121823a81902375f6be893bb",
"text": "Internet users are often victimized by malicious attackers. Some attackers infect and use innocent users’ machines to launch large-scale attacks without the users’ knowledge. One of such attacks is the click-fraud attack. Click-fraud happens in Pay-Per-Click (PPC) ad networks where the ad network charges advertisers for every click on their ads. Click-fraud has been proved to be a serious problem for the online advertisement industry. In a click-fraud attack, a user or an automated software clicks on an ad with a malicious intent and advertisers need to pay for those valueless clicks. Among many forms of click-fraud, botnets with the automated clickers are the most severe ones. In this paper, we present a method for detecting automated clickers from the user-side. The proposed method to Fight Click-Fraud, FCFraud, can be integrated into the desktop and smart device operating systems. Since most modern operating systems already provide some kind of anti-malware service, our proposed method can be implemented as a part of the service. We believe that an effective protection at the operating system level can save billions of dollars of the advertisers. Experiments show that FCFraud is 99.6% (98.2% in mobile ad library generated traffic) accurate in classifying ad requests from all user processes and it is 100% successful in detecting clickbots in both desktop and mobile devices. We implement a cloud backend for the FCFraud service to save battery power in mobile devices. The overhead of executing FCFraud is also analyzed and we show that it is reasonable for both the platforms. Copyright c © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "485f7998056ef7a30551861fad33bef4",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "7878bdfb6fee9c519fe95f679b858917",
"text": "This letter studies and compares the physical-layer security approach and the covert communication approach for a wiretap channel with a transmitter, an intended receiver and an eavesdropper. In order to make the comparison, we investigate the power allocation problems for maximizing the secrecy/covert rate subject to the transmit power constraint. Simulation results illustrate that if the eavesdropper is not noisy or it is near to the transmitter, covert communication is more preferable.",
"title": ""
},
{
"docid": "b594a4fafc37a18773b1144dfdbb965d",
"text": "Deep generative modelling for robust human body analysis is an emerging problem with many interesting applications, since it enables analysis-by-synthesis and unsupervised learning. However, the latent space learned by such models is typically not human-interpretable, resulting in less flexible models. In this work, we adopt a structured semi-supervised variational auto-encoder approach and present a deep generative model for human body analysis where the pose and appearance are disentangled in the latent space, allowing for pose estimation. Such a disentanglement allows independent manipulation of pose and appearance and hence enables applications such as pose-transfer without being explicitly trained for such a task. In addition, the ability to train in a semi-supervised setting relaxes the need for labelled data. We demonstrate the merits of our generative model on the Human3.6M and ChictopiaPlus datasets.",
"title": ""
},
{
"docid": "a72c9eb8382d3c94aae77fa4eadd1df8",
"text": "Techniques for identifying the author of an unattributed document can be applied to problems in information analysis and in academic scholarship. A range of methods have been proposed in the research literature, using a variety of features and machine learning approaches, but the methods have been tested on very different data and the results cannot be compared. It is not even clear whether the differences in performance are due to feature selection or other variables. In this paper we examine the use of a large publicly available collection of newswire articles as a benchmark for comparing authorship attribution methods. To demonstrate the value of having a benchmark, we experimentally compare several recent feature-based techniques for authorship attribution, and test how well these methods perform as the volume of data is increased. We show that the benchmark is able to clearly distinguish between different approaches, and that the scalability of the best methods based on using function words features is acceptable, with only moderate decline as the difficulty of the problem is increased.",
"title": ""
},
{
"docid": "ce5fc5fbb3cb0fb6e65ca530bfc097b1",
"text": "The Bulgarian electricity market rules require from the transmission system operator, to procure electricity for covering transmission grid losses on hourly base before day-ahead gate closure. In this paper is presented a software solution for day-ahead forecasting of hourly transmission losses that is based on statistical approach of the impacting factors correlations and uses as inputs numerical weather predictions.",
"title": ""
},
{
"docid": "7bcdfd8830815fb55358d3102ac5b246",
"text": "Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale. Recent advances in GPU hardware have enabled neural networks to achieve significant gains over the previous best models, these models still fail to leverage GPUs’ capability for massive parallelism due to their requirement of sequential processing of the sentence. In response, we propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for graphbased dependency parsing, a graph convolutional architecture that allows for efficient end-to-end GPU parsing. In experiments on the English Penn TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best neural network parsers.",
"title": ""
},
{
"docid": "10947ff2f981ddf28934df8ac640208d",
"text": "The future of tropical forest biodiversity depends more than ever on the effective management of human-modified landscapes, presenting a daunting challenge to conservation practitioners and land use managers. We provide a critical synthesis of the scientific insights that guide our understanding of patterns and processes underpinning forest biodiversity in the human-modified tropics, and present a conceptual framework that integrates a broad range of social and ecological factors that define and contextualize the possible future of tropical forest species. A growing body of research demonstrates that spatial and temporal patterns of biodiversity are the dynamic product of interacting historical and contemporary human and ecological processes. These processes vary radically in their relative importance within and among regions, and have effects that may take years to become fully manifest. Interpreting biodiversity research findings is frequently made difficult by constrained study designs, low congruence in species responses to disturbance, shifting baselines and an over-dependence on comparative inferences from a small number of well studied localities. Spatial and temporal heterogeneity in the potential prospects for biodiversity conservation can be explained by regional differences in biotic vulnerability and anthropogenic legacies, an ever-tighter coupling of human-ecological systems and the influence of global environmental change. These differences provide both challenges and opportunities for biodiversity conservation. Building upon our synthesis we outline a simple adaptive-landscape planning framework that can help guide a new research agenda to enhance biodiversity conservation prospects in the human-modified tropics.",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
{
"docid": "7c8d1b0c77acb4fd6db6e7f887e66133",
"text": "Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. We decided to analyze apparent cases of SSDHI from our prospective databank. We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI.",
"title": ""
},
{
"docid": "bed9bdf4d4965610b85378f2fdbfab2a",
"text": "Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is n o established vocabulary, leading to confusion when comparing research efforts. The t e r m W e b mining has been used in two distinct ways. T h e first, called Web content mining in this paper, is the process of information discovery f rom sources across the World Wide Web. The second, called Web m a g e mining, is the process of mining f o r user browsing and access patterns. I n this paper we define W e b mining and present an overview of the various research issues, techniques, and development e f forts . W e briefly describe W E B M I N E R , a system for Web usage mining, and conclude this paper by listing research issues.",
"title": ""
},
{
"docid": "5490e43dc61771bd713f7916a1643aef",
"text": "This paper describes the integration of Amazon Alexa with the Talkamatic Dialogue Manager (TDM), and shows how flexible dialogue skills and rapid prototyping of dialogue apps can be brought to the Alexa platform. 1. Alexa Amazon’s Alexa 1 is a spoken dialogue interface open to third party developers who want to develop their own Alexa ”skills”. Alexa has received a lot of attention and has brought renewed interest to conversational interfaces. It has strong STT (Speech To Text), TTS (Text To Speech) and NLU (Natural Language Understanding) capabilities, but provides less support in the areas of dialogue management and generation, essentially leaving these tasks to the skill developer. See Figure 1 for an overview of the Alexa architecture. An Alexa Skill definition is more or less domain-specific. It also includes generation of natural language output, which makes it language specific. Leaving NLG to the Skill developer works fairly well when performing simple tasks but for domains demanding more complex conversational capabilities, development will be more challenging. Localizing skills to new languages will be another challenge especially if the languages is grammatically more cimplex than English. 2. TDM TDM (Talkamatic Dialogue Manager) [1, 2] is a Dialogue Manager with built-in multimodality, multilinguality, and multi-domain support, and an SDK enabling rapid development of conversational interfaces with a high degree of naturalness and usability. The basic principle behind TDM is separation of concerns – do not mix different kinds of knowledge. TDM keeps the following kinds of knowledge separated from each other: • Dialogue knowledge • Domain knowledge • General linguistic knowledge of a particular language • Domain-specific language • Integration to services and data Dialogue knowledge is encoded in the TDM DME (Dialogue Move Engine). Domain knowledge is declared in the DDD (see below). General linguistic knowledge is described in the Resource Grammar Library. Domain-specific language is described in the DDD-specific grammar. The Service and data integration is described by the Service Interface, a part of the DDD. 1https://developer.amazon.com/alexa 2www.talkamatic.se The dialogue knowledge encoded in TDM enables it to handle a host of dialogue behaviours, including but not limited to: • Overand other-answering (giving more or other information than requested) • Embedded subdialogues (multiple conversational threads) • Task recognition and clarification from incomplete user utterances • Grounding (verification) and correction TDM also supports localisation of applications to new languages (provided that STT and TTS is available). The currently supported and tested languages are English, Mandarin Chinese, Dutch and French. Support for more languages will be added in the future. 3. The relation Alexa – TDM We see the combination of TDM and Alexa as a perfect match. The strengths of the Alexa dialogue platform include the nicely integrated functionality for STT, NLU, and TTS, along with the integration with the Echo hardware. The strengths of TDM are centered on the Dialogue Management component and the multilingual generation. The strengths of the two platforms are thus complementary and non-overlapping. 4. TDM Alexa integration See Figure 2 for an overview of the Alexa-TDM integration. A wrapper around TDM receives intents (e.g. 
requests and questions) and slots (parameters) from Alexa, which are then translated to their TDM counterparts (request-, askand answermoves) and passed to TDM. The TDM DME (Dialogue Move Engine) then handles dialogue management (updating the information state based on observed dialogue moves, and selecting the best next system move) and the utterance generation (translating the system moves into text), which are then passed back to Alexa using the TDM wrapper. 5. Dialogue Domain Descriptions A TDM application (corresponding roughly to an Alexa skill) is defined by a DDD a Dialogue Domain Description. The DDD is a mostly declarative description of a particular dialogue subject. Firstly, it contains information about what information (basically intentions and slots) is available in a dialogue context, and how this information is related (dialogue plans). Secondly, it contains information about how users and the system speak about this information (grammar). Lastly it contains information about how the information in the dialogue is related to the real world (service interface). Copyright © 2017 ISCA INTERSPEECH 2017: Show & Tell Contribution August 20–24, 2017, Stockholm, Sweden",
"title": ""
},
{
"docid": "e51fe12eecec4116a9a3b7f4c2281938",
"text": "The use of wireless technologies in automation systems offers attractive benefits, but introduces a number of new technological challenges. The paper discusses these aspects for home and building automation applications. Relevant standards are surveyed. A wireless extension to KNX/EIB based on tunnelling over IEEE 802.15.4 is presented. The design emulates the properties of the KNX/EIB wired medium via wireless communication, allowing a seamless extension. Furthermore, it is geared towards zero-configuration and supports the easy integration of protocol security.",
"title": ""
},
{
"docid": "226cab96cff53614e2cf76e76001f168",
"text": "We introduce a new incremental 2-manifold surface reconstruction method. Compared to the previous works, its input is a sparse 3D point cloud estimated by a Structure-from-Motion (SfM) algorithm instead of a more common dense input. The main argument against such a method is that the lack of points implies an inaccurate scene surface. However, the advantages like point quality (thanks to the SfM machinery including bundle adjustment and interest point detection) and simplified resulting surface makes it worth of exploration. Our algorithm is incremental since the surface is locally updated for every new camera pose (and its 3D points) estimated by SfM. This is an advantage compared to global methods like [5] or [2] for applications which require a surface while reading the video sequence. Compared to [6], our method avoids prohibitive time complexity in presence of loops in the camera trajectory. Last but not least, unlike other incremental methods like [3] the output surface is a 2-manifold, i.e. it is a list of triangles in 3D such that the neighborhood of every surface point is topologically a disk. This property is needed to define surface normal and curvature [1] and thus is used by many mesh processing and computational geometry algorithms. Now we introduce notations. Let P be a set of 3D points on the unknown scene surface. The 3D Delaunay triangulation of P is a list T of tetrahedra which partition the convex hull of P. A list O of tetrahedra (O⊆ T ) represents the reconstructed object whose volume is |O|, the union of the O tetrahedra. Border δO is the list of triangles (tetrahedra faces) which are included in exactly one tetrahedra of O. The union of triangles |δO| is our target surface and should be a 2-manifold. SfM also provides visibility knowledge Ri: every point pi ∈ P is computed from camera locations c j where j ∈ Ri. This implies that |δO| should not intersect the rays (line segments) c jpi, j ∈ Ri except at pi. The tetrahedra intersected by a ray are labeled freespace, the others are matter. Let F be the set of free-space tetrahedra. In practice, |δF | is not a 2-manifold. One iteration of our algorithm is shown in Fig. 1. At image (or time) t +1, we have the following input",
"title": ""
},
{
"docid": "73abeef146be96d979a56a4794a5e130",
"text": "Regular path queries (RPQs) are a fundamental part of recent graph query languages like SPARQL and PGQL. They allow the definition of recursive path structures through regular expressions in a declarative pattern matching environment. We study the use of the K2-tree graph compression technique to materialize RPQ results with low memory consumption for indexing. Compact index representations enable the efficient storage of multiple indexes for varying RPQs.",
"title": ""
},
{
"docid": "4d99090b874776b89092f63f21c8ea93",
"text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.",
"title": ""
}
] |
scidocsrr
|
700af11d69e36e5a57c0d41c1c96cead
|
Modeling Customer Lifetime Value in the Telecom Industry
|
[
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
}
] |
[
{
"docid": "dd14599e6a4d2e83a7a476471be53d13",
"text": "This paper presents the modeling, design, fabrication, and measurement of microelectromechanical systems-enabled continuously tunable evanescent-mode electromagnetic cavity resonators and filters with very high unloaded quality factors (Qu). Integrated electrostatically actuated thin diaphragms are used, for the first time, for tuning the frequency of the resonators/filters. An example tunable resonator with 2.6:1 (5.0-1.9 GHz) tuning ratio and Qu of 300-650 is presented. A continuously tunable two-pole filter from 3.04 to 4.71 GHz with 0.7% bandwidth and insertion loss of 3.55-2.38 dB is also shown as a technology demonstrator. Mechanical stability measurements show that the tunable resonators/filters exhibit very low frequency drift (less than 0.5% for 3 h) under constant bias voltage. This paper significantly expands upon previously reported tunable resonators.",
"title": ""
},
{
"docid": "8fccceb2757decb670eed84f4b2405a1",
"text": "This paper develops and evaluates search and optimization techniques for autotuning 3D stencil (nearest neighbor) computations on GPUs. Observations indicate that parameter tuning is necessary for heterogeneous GPUs to achieve optimal performance with respect to a search space. Our proposed framework takes a most concise specification of stencil behavior from the user as a single formula, autogenerates tunable code from it, systematically searches for the best configuration and generates the code with optimal parameter configurations for different GPUs. This autotuning approach guarantees adaptive performance for different generations of GPUs while greatly enhancing programmer productivity. Experimental results show that the delivered floating point performance is very close to previous handcrafted work and outperforms other autotuned stencil codes by a large margin. Furthermore, heterogeneous GPU clusters are shown to exhibit the highest performance for dissimilar tuning parameters leveraging proportional partitioning relative to single-GPU performance.",
"title": ""
},
{
"docid": "e902cdc8d2e06d7dd325f734b0a289b6",
"text": "Vaccinium arctostaphylos is a traditional medicinal plant in Iran used for the treatment of diabetes mellitus. In our search for antidiabetic compounds from natural sources, we found that the extract obtained from V. arctostaphylos berries showed an inhibitory effect on pancreatic alpha-amylase in vitro [IC50 = 1.91 (1.89-1.94) mg/mL]. The activity-guided purification of the extract led to the isolation of malvidin-3-O-beta-glucoside as an a-amylase inhibitor. The compound demonstrated a dose-dependent enzyme inihibitory activity [IC50 = 0.329 (0.316-0.342) mM].",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "773c132b708a605039d59de52a3cf308",
"text": "BACKGROUND\nAirSeal is a novel class of valve-free insufflation system that enables a stable pneumoperitoneum with continuous smoke evacuation and carbon dioxide (CO₂) recirculation during laparoscopic surgery. Comparison data to standard CO₂ pressure pneumoperitoneum insufflators is scarce. The aim of this study is to evaluate the potential advantages of AirSeal compared to a standard CO₂ insufflator.\n\n\nMETHODS/DESIGN\nThis is a single center randomized controlled trial comparing elective laparoscopic cholecystectomy, colorectal surgery and hernia repair with AirSeal (group A) versus a standard CO₂ pressure insufflator (group S). Patients are randomized using a web-based central randomization and registration system. Primary outcome measures will be operative time and level of postoperative shoulder pain by using the visual analog score (VAS). Secondary outcomes include the evaluation of immunological values through blood tests, anesthesiological parameters, surgical side effects and length of hospital stay. Taking into account an expected dropout rate of 5%, the total number of patients is 182 (n = 91 per group). All tests will be two-sided with a confidence level of 95% (P <0.05).\n\n\nDISCUSSION\nThe duration of an operation is an important factor in reducing the patient's exposure to CO₂ pneumoperitoneum and its adverse consequences. This trial will help to evaluate if the announced advantages of AirSeal, such as clear sight of the operative site and an exceptionally stable working environment, will facilitate the course of selected procedures and influence operation time and patients clinical outcome.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT01740011, registered 23 November 2012.",
"title": ""
},
{
"docid": "fea8bf3ca00b3440c2b34188876917a2",
"text": "Digitalization has been identified as one of the major trends changing society and business. Digitalization causes changes for companies due to the adoption of digital technologies in the organization or in the operation environment. This paper discusses digitalization from the viewpoint of diverse case studies carried out to collect data from several companies, and a literature study to complement the data. This paper describes the first version of the digital transformation model, derived from synthesis of these industrial cases, explaining a starting point for a systematic approach to tackle digital transformation. The model is aimed to help companies systematically handle the changes associated with digitalization. The model consists of four main steps, starting with positioning the company in digitalization and defining goals for the company, and then analyzing the company’s current state with respect to digitalization goals. Next, a roadmap for reaching the goals is defined and implemented in the company. These steps are iterative and can be repeated several times. Although company situations vary, these steps will help to systematically approach digitalization and to take the steps necessary to benefit from it.",
"title": ""
},
{
"docid": "f2f2b48cd35d42d7abc6936a56aa580d",
"text": "Complete enumeration of all the sequences to establish global optimality is not feasible as the search space, for a general job-shop scheduling problem, ΠG has an upper bound of (n!). Since the early fifties a great deal of research attention has been focused on solving ΠG, resulting in a wide variety of approaches such as Branch and Bound, Simulated Annealing, Tabu Search, etc. However limited success has been achieved by these methods due to the shear intractability of this generic scheduling problem. Recently, much effort has been concentrated on using neural networks to solve ΠG as they are capable of adapting to new environments with little human intervention and can mimic thought processes. Major contributions in solving ΠG using a Hopfield neural network, as well as applications of back-error propagation to general scheduling problems are presented. To overcome the deficiencies in these applications a modified back-error propagation model, a simple yet powerful parallel architecture which can be successfully simulated on a personal computer, is applied to solve ΠG.",
"title": ""
},
{
"docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd",
"text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.",
"title": ""
},
{
"docid": "5499d3f75391ec2a28dcc84d3a3c4410",
"text": "DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems.",
"title": ""
},
{
"docid": "49dd14500296da55b7ed34d96af30b13",
"text": "Deadly infections from opportunistic fungi have risen in frequency, largely because of the at-risk immunocompromised population created by advances in modern medicine and the HIV/AIDS pandemic. This review focuses on dynamics of the fungal polysaccharide cell wall, which plays an outsized role in fungal pathogenesis and therapy because it acts as both an environmental barrier and as the major interface with the host immune system. Human fungal pathogens use architectural strategies to mask epitopes from the host and prevent immune surveillance, and recent work elucidates how biotic and abiotic stresses present during infection can either block or enhance masking. The signaling components implicated in regulating fungal immune recognition can teach us how cell wall dynamics are controlled, and represent potential targets for interventions designed to boost or dampen immunity.",
"title": ""
},
{
"docid": "d0b2999de796ec3215513536023cc2be",
"text": "Recently proposed machine comprehension (MC) application is an effort to deal with natural language understanding problem. However, the small size of machine comprehension labeled data confines the application of deep neural networks architectures that have shown advantage in semantic inference tasks. Previous methods use a lot of NLP tools to extract linguistic features but only gain little improvement over simple baseline. In this paper, we build an attention-based recurrent neural network model, train it with the help of external knowledge which is semantically relevant to machine comprehension, and achieves a new state-of-the-art result.",
"title": ""
},
{
"docid": "a40e71e130f31450ce1e60d9cd4a96be",
"text": "Progering® is the only intravaginal ring intended for contraception therapies during lactation. It is made of silicone and releases progesterone through the vaginal walls. However, some drawbacks have been reported in the use of silicone. Therefore, ethylene vinyl acetate copolymer (EVA) was tested in order to replace it. EVA rings were produced by a hot-melt extrusion procedure. Swelling and degradation assays of these matrices were conducted in different mixtures of ethanol/water. Solubility and partition coefficient of progesterone were measured, together with the initial hormone load and characteristic dimensions. A mathematical model was used to design an EVA ring that releases the hormone at specific rate. An EVA ring releasing progesterone in vitro at about 12.05 ± 8.91 mg day−1 was successfully designed. This rate of release is similar to that observed for Progering®. In addition, it was observed that as the initial hormone load or ring dimension increases, the rate of release also increases. Also, the device lifetime was extended with a rise in the initial amount of hormone load. EVA rings could be designed to release progesterone in vitro at a rate of 12.05 ± 8.91 mg day−1. This ring would be used in contraception therapies during lactation. The use of EVA in this field could have initially several advantages: less initial and residual hormone content in rings, no need for additional steps of curing or crosslinking, less manufacturing time and costs, and the possibility to recycle the used rings.",
"title": ""
},
{
"docid": "6b1dd01c57f967e3caf83af9343099c5",
"text": "We have devised and implemented a novel computational strategy for de novo design of molecules with desired properties termed ReLeaSE (Reinforcement Learning for Structural Evolution). On the basis of deep and reinforcement learning (RL) approaches, ReLeaSE integrates two deep neural networks—generative and predictive—that are trained separately but are used jointly to generate novel targeted chemical libraries. ReLeaSE uses simple representation of molecules by their simplified molecular-input line-entry system (SMILES) strings only. Generative models are trained with a stack-augmented memory network to produce chemically feasible SMILES strings, and predictive models are derived to forecast the desired properties of the de novo–generated compounds. In the first phase of the method, generative and predictive models are trained separately with a supervised learning algorithm. In the second phase, both models are trained jointly with the RL approach to bias the generation of new chemical structures toward those with the desired physical and/or biological properties. In the proof-of-concept study, we have used the ReLeaSE method to design chemical libraries with a bias toward structural complexity or toward compounds with maximal, minimal, or specific range of physical properties, such as melting point or hydrophobicity, or toward compounds with inhibitory activity against Janus protein kinase 2. The approach proposed herein can find a general use for generating targeted chemical libraries of novel compounds optimized for either a single desired property or multiple properties.",
"title": ""
},
{
"docid": "f31a8b627e6a0143e70cf1526bf827fa",
"text": "D-amino acid oxidase (DAO) has been reported to be associated with schizophrenia. This study aimed to search for genetic variants associated with this gene. The genomic regions of all exons, highly conserved regions of introns, and promoters of this gene were sequenced. Potentially meaningful single-nucleotide polymorphisms (SNPs) obtained from direct sequencing were selected for genotyping in 600 controls and 912 patients with schizophrenia and in a replicated sample consisting of 388 patients with schizophrenia. Genetic associations were examined using single-locus and haplotype association analyses. In single-locus analyses, the frequency of the C allele of a novel SNP rs55944529 located at intron 8 was found to be significantly higher in the original large patient sample (p = 0.016). This allele was associated with a higher level of DAO mRNA expression in the Epstein-Barr virus-transformed lymphocytes. The haplotype distribution of a haplotype block composed of rs11114083-rs2070586-rs2070587-rs55944529 across intron 1 and intron 8 was significantly different between the patients and controls and the haplotype frequencies of AAGC were significantly higher in patients, in both the original (corrected p < 0.0001) and replicated samples (corrected p = 0.0003). The CGTC haplotype was specifically associated with the subgroup with deficits in sustained attention and executive function and the AAGC haplotype was associated with the subgroup without such deficits. The DAO gene was a susceptibility gene for schizophrenia and the genomic region between intron 1 and intron 8 may harbor functional genetic variants, which may influence the mRNA expression of DAO and neurocognitive functions in schizophrenia.",
"title": ""
},
{
"docid": "ca544972e6fe3c051f72d04608ff36c1",
"text": "The prefrontal cortex (PFC) plays a key role in controlling goal-directed behavior. Although a variety of task-related signals have been observed in the PFC, whether they are differentially encoded by various cell types remains unclear. Here we performed cellular-resolution microendoscopic Ca(2+) imaging from genetically defined cell types in the dorsomedial PFC of mice performing a PFC-dependent sensory discrimination task. We found that inhibitory interneurons of the same subtype were similar to each other, but different subtypes preferentially signaled different task-related events: somatostatin-positive neurons primarily signaled motor action (licking), vasoactive intestinal peptide-positive neurons responded strongly to action outcomes, whereas parvalbumin-positive neurons were less selective, responding to sensory cues, motor action, and trial outcomes. Compared to each interneuron subtype, pyramidal neurons showed much greater functional heterogeneity, and their responses varied across cortical layers. Such cell-type and laminar differences in neuronal functional properties may be crucial for local computation within the PFC microcircuit.",
"title": ""
},
{
"docid": "a941e1fb5a21fafa8e78269c4bd90637",
"text": "The penis is the male organ of copulation and is composed of erectile tissue that encases the extrapelvic portion of the urethra (Fig. 66-1). The penis of the horse is musculocavernous and can be divided into three parts: the root, the body or shaft, and the glans penis. The penis originates caudally at the root, which is fixed to the lateral aspects of the ischial arch by two crura (leg-like parts) that converge to form the shaft of the penis. The shaft constitutes the major portion of the penis and begins at the junction of the crura. It is attached caudally to the symphysis ischii of the pelvis by two short suspensory ligaments that merge with the origin of the gracilis muscles (Fig. 66-2). The glans penis is the conical enlargement that caps the shaft. The urethra passes over the ischial arch between the crura and curves cranioventrally to become incorporated within erectile tissue of the penis. The mobile shaft and glans penis extend cranioventrally to the umbilical region of the abdominal wall. The body is cylindrical but compressed laterally. When quiescent, the penis is soft, compressible, and about 50 cm long. Fifteen to 20 cm lie free in the prepuce. When maximally erect, the penis is up to three times longer than when it is in a quiescent state. Erectile Bodies",
"title": ""
},
{
"docid": "3b85d3eef49825e67f77769950b80800",
"text": "The phishing is a technique used by cyber-criminals to impersonate legitimate websites in order to obtain personal information. This paper presents a novel lightweight phishing detection approach completely based on the URL (uniform resource locator). The mentioned system produces a very satisfying recognition rate which is 95.80%. This system, is an SVM (support vector machine) tested on a 2000 records data-set consisting of 1000 legitimate and 1000 phishing URLs records. In the literature, several works tackled the phishing attack. However those systems are not optimal to smartphones and other embed devices because of their complex computing and their high battery usage. The proposed system uses only six URL features to perform the recognition. The mentioned features are the URL size, the number of hyphens, the number of dots, the number of numeric characters plus a discrete variable that correspond to the presence of an IP address in the URL and finally the similarity index. Proven by the results of this study the similarity index, the feature we introduce for the first time as input to the phishing detection systems improves the overall recognition rate by 21.8%.",
"title": ""
},
{
"docid": "13642d5d73a58a1336790f74a3f0eac7",
"text": "Fifty-eight patients received an Osteonics constrained acetabular implant for recurrent instability (46), girdlestone reimplant (8), correction of leg lengthening (3), and periprosthetic fracture (1). The constrained liner was inserted into a cementless shell (49), cemented into a pre-existing cementless shell (6), cemented into a cage (2), and cemented directly into the acetabular bone (1). Eight patients (13.8%) required reoperation for failure of the constrained implant. Type I failure (bone-prosthesis interface) occurred in 3 cases. Two cementless shells became loose, and in 1 patient, the constrained liner was cemented into an acetabular cage, which then failed by pivoting laterally about the superior fixation screws. Type II failure (liner locking mechanism) occurred in 2 cases. Type III failure (femoral head locking mechanism) occurred in 3 patients. Seven of the 8 failures occurred in patients with recurrent instability. Constrained liners are an effective method for treatment during revision total hip arthroplasty but should be used in select cases only.",
"title": ""
},
{
"docid": "9fa53682b83e925409ea115569494f70",
"text": "Circuit techniques for enabling a sub-0.9 V logic-compatible embedded DRAM (eDRAM) are presented. A boosted 3T gain cell utilizes Read Word-line (RWL) preferential boosting to increase read margin and improve data retention time. Read speed is enhanced with a hybrid current/voltage sense amplifier that allows the Read Bit-line (RBL) to remain close to VDD. A regulated bit-line write scheme for driving the Write Bit-line (WBL) is equipped with a steady-state storage node voltage monitor to overcome the data `1' write disturbance problem of the PMOS gain cell without introducing another boosted supply for the Write Word-line (WWL) over-drive. An adaptive and die-to-die adjustable read reference bias generator is proposed to cope with PVT variations. Monte Carlo simulations compare the 6-sigma read and write performance of proposed eDRAM against conventional designs. Measurement results from a 64 kb eDRAM test chip implemented in a 65 nm low-leakage CMOS process show a 1.25 ms data retention time with a 2 ns random cycle time at 0.9 V, 85°C, and a 91.3 μW per Mb static power dissipation at 1.0 V, 85°C.",
"title": ""
},
{
"docid": "c9d137a71c140337b3f8345efdac17ab",
"text": "For more than 30 years, many authors have attempted to synthesize the knowledge about how an enterprise should structure its business processes, the people that execute them, the Information Systems that support both of these and the IT layer on which such systems operate, in such a way that they will be aligned with the business strategy. This is the challenge of Enterprise Architecture design, which is the theme of this paper. We will provide a brief review of the literature on this subject, with an emphasis on more recent proposals and methods that have been applied in practice. We also select approaches that propose some sort of framework that provides a general Enterprise Architecture in a given domain that can be reused as a basis for specific designs in such a domain. Then we present our proposal for Enterprise Architecture design, which is based on general domain models that we call Enterprise Architecture Patterns.",
"title": ""
}
] |
scidocsrr
|
4cfc3605506ddb7b5283cd00eb17f4f1
|
LCM: Lightweight Communications and Marshalling
|
[
{
"docid": "2cc1373758f509c39275562f69b602c1",
"text": "This paper presents our solution for enabling a quadrotor helicopter to autonomously navigate unstructured and unknown indoor environments. We compare two sensor suites, specifically a laser rangefinder and a stereo camera. Laser and camera sensors are both well-suited for recovering the helicopter’s relative motion and velocity. Because they use different cues from the environment, each sensor has its own set of advantages and limitations that are complimentary to the other sensor. Our eventual goal is to integrate both sensors on-board a single helicopter platform, leading to the development of an autonomous helicopter system that is robust to generic indoor environmental conditions. In this paper, we present results in this direction, describing the key components for autonomous navigation using either of the two sensors separately.",
"title": ""
}
] |
[
{
"docid": "50e9cf4ff8265ce1567a9cc82d1dc937",
"text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 [email protected] Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naà ̄ve Bayesian classifier. A Naà ̄ve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So itâ€TMs pretty clear by now that statistics and machine learning arenâ€TMt very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models",
"title": ""
},
{
"docid": "7499f88de9d2f76008dc38e96b08ca0a",
"text": "Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate.",
"title": ""
},
{
"docid": "86e16c911d9a381ca46225c65222177d",
"text": "Steep, soil-mantled hillslopes evolve through the downslope movement of soil, driven largely by slope-dependent ransport processes. Most landscape evolution models represent hillslope transport by linear diffusion, in which rates of sediment transport are proportional to slope, such that equilibrium hillslopes should have constant curvature between divides and channels. On many soil-mantled hillslopes, however, curvature appears to vary systematically, such that slopes are typically convex near the divide and become increasingly planar downslope. This suggests that linear diffusion is not an adequate model to describe the entire morphology of soil-mantled hillslopes. Here we show that the interaction between local disturbances (such as rainsplash and biogenic activity) and frictional and gravitational forces results in a diffusive transport law that depends nonlinearly on hillslope gradient. Our proposed transport law (1) approximates linear diffusion at low gradients and (2) indicates that sediment flux increases rapidly as gradient approaches a critical value. We calibrated and tested this transport law using high-resolution topographic data from the Oregon Coast Range. These data, obtained by airborne laser altimetry, allow us to characterize hillslope morphology at •2 m scale. At five small basins in our study area, hillslope curvature approaches zero with increasing gradient, consistent with our proposed nonlinear diffusive transport law. Hillslope gradients tend to cluster near values for which sediment flux increases rapidly with slope, such that large changes in erosion rate will correspond to small changes in gradient. Therefore average hillslope gradient is unlikely to be a reliable indicator of rates of tectonic forcing or baselevel owering. Where hillslope erosion is dominated by nonlinear diffusion, rates of tectonic forcing will be more reliably reflected in hillslope curvature near the divide rather than average hillslope gradient.",
"title": ""
},
{
"docid": "61096a0d1e94bb83f7bd067b06d69edd",
"text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. ar X iv :1 80 6. 11 37 9v 1 [ cs .L G ] 2 9 Ju n 20 18 Theory IIIb: Generalization in Deep Networks Tomaso Poggio ∗1, Qianli Liao1, Brando Miranda1, Andrzej Banburski1, Xavier Boix1, and Jack Hidary2 1Center for Brains, Minds and Machines, MIT 2Alphabet (Google) X",
"title": ""
},
{
"docid": "21ac4dac4ddbdfd271e6f546405fb3f7",
"text": "This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.",
"title": ""
},
{
"docid": "4d449388969075c56b921f9183fbc7b5",
"text": "Tasks such as question answering and semantic search are dependent on the ability of querying & reasoning over large-scale commonsense knowledge bases (KBs). However, dealing with commonsense data demands coping with problems such as the increase in schema complexity, semantic inconsistency, incompleteness and scalability. This paper proposes a selective graph navigation mechanism based on a distributional relational semantic model which can be applied to querying & reasoning over heterogeneous knowledge bases (KBs). The approach can be used for approximative reasoning, querying and associational knowledge discovery. In this paper we focus on commonsense reasoning as the main motivational scenario for the approach. The approach focuses on addressing the following problems: (i) providing a semantic selection mechanism for facts which are relevant and meaningful in a specific reasoning & querying context and (ii) allowing coping with information incompleteness in large KBs. The approach is evaluated using ConceptNet as a commonsense KB, and achieved high selectivity, high scalability and high accuracy in the selection of meaningful navigational paths. Distributional semantics is also used as a principled mechanism to cope with information incompleteness.",
"title": ""
},
{
"docid": "01d441a277e9f9cbf6af40d0d526d44f",
"text": "On-orbit fabrication of spacecraft components can enable space programs to escape the volumetric limitations of launch shrouds and create systems with extremely large apertures and very long baselines in order to deliver higher resolution, higher bandwidth, and higher SNR data. This paper will present results of efforts to investigated the value proposition and technical feasibility of adapting several of the many rapidly-evolving additive manufacturing and robotics technologies to the purpose of enabling space systems to fabricate and integrate significant parts of themselves on-orbit. We will first discuss several case studies for the value proposition for on-orbit fabrication of space structures, including one for a starshade designed to enhance the capabilities for optical imaging of exoplanets by the proposed New World Observer mission, and a second for a long-baseline phased array radar system. We will then summarize recent work adapting and evolving additive manufacturing techniques and robotic assembly technologies to enable automated on-orbit fabrication of large, complex, three-dimensional structures such as trusses, antenna reflectors, and shrouds.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "a6773662bc858664d95e3df315d11f6c",
"text": "In this paper, we examine the strength of deep learning technique for diagnosing lung cancer on medical image analysis problem. Convolutional neural networks (CNNs) models become popular among the pattern recognition and computer vision research area because of their promising outcome on generating high-level image representations. We propose a new deep learning architecture for learning high-level image representation to achieve high classification accuracy with low variance in medical image binary classification tasks. We aim to learn discriminant compact features at beginning of our deep convolutional neural network. We evaluate our model on Kaggle Data Science Bowl 2017 (KDSB17) data set, and compare it with some related works proposed in the Kaggle competition.",
"title": ""
},
{
"docid": "fb80a9ad20947bee7ba23d585896b6e8",
"text": "This paper presents an intelligent streetlight management system based on LED lamps, designed to facilitate its deployment in existing facilities. The proposed approach, which is based on wireless communication technologies, will minimize the cost of investment of traditional wired systems, which always need civil engineering for burying of cable underground and consequently are more expensive than if the connection of the different nodes is made over the air. The deployed solution will be aware of their surrounding's environmental conditions, a fact that will be approached for the system intelligence in order to learn, and later, apply dynamic rules. The knowledge of real time illumination needs, in terms of instant use of the street in which it is installed, will also feed our system, with the objective of providing tangible solutions to reduce energy consumption according to the contextual needs, an exact calculation of energy consumption and reliable mechanisms for preventive maintenance of facilities.",
"title": ""
},
{
"docid": "c14c575eed397c522a3bc0d2b766a836",
"text": "Being highly unsaturated, carotenoids are susceptible to isomerization and oxidation during processing and storage of foods. Isomerization of trans-carotenoids to cis-carotenoids, promoted by contact with acids, heat treatment and exposure to light, diminishes the color and the vitamin A activity of carotenoids. The major cause of carotenoid loss, however, is enzymatic and non-enzymatic oxidation, which depends on the availability of oxygen and the carotenoid structure. It is stimulated by light, heat, some metals, enzymes and peroxides and is inhibited by antioxidants. Data on percentage losses of carotenoids during food processing and storage are somewhat conflicting, but carotenoid degradation is known to increase with the destruction of the food cellular structure, increase of surface area or porosity, length and severity of the processing conditions, storage time and temperature, transmission of light and permeability to O2 of the packaging. Contrary to lipid oxidation, for which the mechanism is well established, the oxidation of carotenoids is not well understood. It involves initially epoxidation, formation of apocarotenoids and hydroxylation. Subsequent fragmentations presumably result in a series of compounds of low molecular masses. Completely losing its color and biological activities, the carotenoids give rise to volatile compounds which contribute to the aroma/flavor, desirable in tea and wine and undesirable in dehydrated carrot. Processing can also influence the bioavailability of carotenoids, a topic that is currently of great interest.",
"title": ""
},
{
"docid": "2ff15076533d1065209e0e62776eaa69",
"text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high",
"title": ""
},
{
"docid": "0d644ca204280bf3f7bf4ea5e4cb8886",
"text": "Accurate rainfall forecasting is critical because it has a great impact on people’s social and economic activities. Recent trends on various literatures shows that Deep Learning (Neural Network) is a promising methodology to tackle many challenging tasks. In this study, we introduce a brand-new data-driven precipitation prediction model called DeepRain. This model predicts the amount of rainfall from weather radar data, which is three-dimensional and four-channel data, using convolutional LSTM (ConvLSTM). ConvLSTM is a variant of LSTM (Long Short-Term Memory) containing a convolution operation inside the LSTM cell. For the experiment, we used radar reflectivity data for a twoyear period whose input is in a time series format in units of 6 min divided into 15 records. The output is the predicted rainfall information for the input data. Experimental results show that two-stacked ConvLSTM reduced RMSE by 23.0% compared to linear regression.",
"title": ""
},
{
"docid": "20f3b5b42f33056276c44fe4b2f655d2",
"text": "We explore unsupervised representation learning of radio communication signals in raw sampled time series representation. We demonstrate that we can learn modulation basis functions using convolutional autoencoders and visually recognize their relationship to the analytic bases used in digital communications. We also propose and evaluate quantitative metrics for quality of encoding using domain relevant performance metrics.",
"title": ""
},
{
"docid": "c904e36191df6989a5f38a52bc206342",
"text": "In present paper we proposed a simple and effective method to compress an image. Here we found success in size reduction of an image without much compromising with it’s quality. Here we used Haar Wavelet Transform to transform our original image and after quantization and thresholding of DWT coefficients Run length coding and Huffman coding schemes have been used to encode the image. DWT is base for quite populate JPEG 2000 technique. Keywords—lossy compression, DWT, quantization, Run length coding, Huffman coding, JPEG2000",
"title": ""
},
{
"docid": "dcf9cba8bf8e2cc3f175e63e235f6b81",
"text": "Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate into every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running Squeezenet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile ConvNet.",
"title": ""
},
{
"docid": "687eedaec4f9f65834f3772a56728834",
"text": "Emoji have grown to become one of the most important forms of communication on the web. With its widespread use, measuring the similarity of emoji has become an important problem for contemporary text processing since it lies at the heart of sentiment analysis, search, and interface design tasks. This paper presents a comprehensive analysis of the semantic similarity of emoji through embedding models that are learned over machine-readable emoji meanings in the EmojiNet knowledge base. Using emoji descriptions, emoji sense labels and emoji sense definitions, and with different training corpora obtained from Twitter and Google News, we develop and test multiple embedding models to measure emoji similarity. To evaluate our work, we create a new dataset called EmoSim508, which assigns human-annotated semantic similarity scores to a set of 508 carefully selected emoji pairs. After validation with EmoSim508, we present a real-world use-case of our emoji embedding models using a sentiment analysis task and show that our models outperform the previous best-performing emoji embedding model on this task. The EmoSim508 dataset and our emoji embedding models are publicly released with this paper and can be downloaded from http://emojinet.knoesis.org/.",
"title": ""
},
{
"docid": "d2a58cf8a92d2a1e0ed1b0aef4396dd2",
"text": "The recent explosion in the adoption of search engines and new media such as blogs and Twitter have facilitated the faster propagation of news and rumors. How quickly does a piece of news spread over these media? How does its popularity diminish over time? Does the rising and falling pattern follow a simple universal law? In this article, we propose SpikeM, a concise yet flexible analytical model of the rise and fall patterns of information diffusion. Our model has the following advantages. First, unification power: it explains earlier empirical observations and generalizes theoretical models including the SI and SIR models. We provide the threshold of the take-off versus die-out conditions for SpikeM and discuss the generality of our model by applying it to an arbitrary graph topology. Second, practicality: it matches the observed behavior of diverse sets of real data. Third, parsimony: it requires only a handful of parameters. Fourth, usefulness: it makes it possible to perform analytic tasks such as forecasting, spotting anomalies, and interpretation by reverse engineering the system parameters of interest (quality of news, number of interested bloggers, etc.). We also introduce an efficient and effective algorithm for the real-time monitoring of information diffusion, namely SpikeStream, which identifies multiple diffusion patterns in a large collection of online event streams. Extensive experiments on real datasets demonstrate that SpikeM accurately and succinctly describes all patterns of the rise and fall spikes in social networks.",
"title": ""
},
{
"docid": "18039aed493fedb6d931661a829ea824",
"text": "As two important operations in data cleaning, similarity join and similarity search have attracted much attention recently. Existing methods to support similarity join usually adopt a prefix-filtering-based framework. They select a prefix of each object and prune object pairs whose prefixes have no overlap. We have an observation that prefix lengths have significant effect on the performance. Different prefix lengths lead to significantly different performance, and prefix filtering does not always achieve high performance. To address this problem, in this paper we propose an adaptive framework to support similarity join. We propose a cost model to judiciously select an appropriate prefix for each object. To efficiently select prefixes, we devise effective indexes. We extend our method to support similarity search. Experimental results show that our framework beats the prefix-filtering-based framework and achieves high efficiency.",
"title": ""
}
] |
scidocsrr
|
f374f0cd5466e6448b0cb31949c956b3
|
Application of machine learning techniques to sentiment analysis
|
[
{
"docid": "5ff263cf4a73c202741c46d5582a960a",
"text": "Sentiment analysis; Sentiment classification; Feature selection; Emotion detection; Transfer learning; Building resources Abstract Sentiment Analysis (SA) is an ongoing field of research in text mining field. SA is the computational treatment of opinions, sentiments and subjectivity of text. This survey paper tackles a comprehensive overview of the last update in this field. Many recently proposed algorithms’ enhancements and various SA applications are investigated and presented briefly in this survey. These articles are categorized according to their contributions in the various SA techniques. The related fields to SA (transfer learning, emotion detection, and building resources) that attracted researchers recently are discussed. The main target of this survey is to give nearly full image of SA techniques and the related fields with brief details. The main contributions of this paper include the sophisticated categorizations of a large number of recent articles and the illustration of the recent trend of research in the sentiment analysis and its related areas. 2014 Production and hosting by Elsevier B.V. on behalf of Ain Shams University.",
"title": ""
},
{
"docid": "8f9c8d90d3b207958c9b7e54a95c1093",
"text": "The paper gives an overview of the different sentiment classification approaches and tools used for sentiment analysis. Starting from this overview the paper provides a classification of (i) approaches with respect to features/techniques and advantages/limitations and (ii) tools with respect to the different techniques used for sentiment analysis. Different application fields of application of sentiment analysis such as: business, politic, public actions and finance are also discussed in the paper.",
"title": ""
}
] |
[
{
"docid": "aae7c62819cb70e21914486ade94a762",
"text": "From failure experience on power transformers very often it was suspected that inrush currents, occurring when energizing unloaded transformers, were the reason for damage. In this paper it was investigated how mechanical forces within the transformer coils build up under inrush compared to those occurring at short circuit. 2D and 3D computer modeling for a real 268 MVA, 525/17.75 kV three-legged step up transformer were employed. The results show that inrush current peaks of 70% of the rated short circuit current cause local forces in the same order of magnitude as those at short circuit. The resulting force summed up over the high voltage coil is even three times higher. Although inrush currents are normally smaller, the forces can have similar amplitudes as those at short circuit, with longer exposure time, however. Therefore, care has to be taken to avoid such high inrush currents. Today controlled switching offers an elegant and practical solution.",
"title": ""
},
{
"docid": "4ac69ffb880cea60dac3b24b55c9c083",
"text": "Patterns of Intelligent and Mobile Agents Elizabeth A.Kendall, P.V. Murali Krishna, Chirag V. Pathak, C:B. Suresh Computer Systems Engineering, Royal Melbourne Institute Of Technology City Campus, GPO Box 2476V, Melbourne, VIC 3001 AUSTRALIA email : [email protected] 1. ABSTRACT Agent systems must have foundation; one approach that successfully applied to other so&are is patterns. This paper collection of patterns for agents. 2. MOTIVATION Almost all agent development to date has a strong has been kinds of presents a",
"title": ""
},
{
"docid": "a91ba04903c584a1165867c7215385d0",
"text": "The INLA approach for approximate Bayesian inference for latent Gaussian models has been shown to give fast and accurate estimates of posterior marginals and also to be a valuable tool in practice via the R-package R-INLA. In this paper we formalize new developments in the R-INLA package and show how these features greatly extend the scope of models that can be analyzed by this interface. We also discuss the current default method in R-INLA to approximate posterior marginals of the hyperparameters using only a modest number of evaluations of the joint posterior distribution of the hyperparameters, without any need for numerical integration.",
"title": ""
},
{
"docid": "8378b870612c37f581a8ad444e2a6424",
"text": "This paper shows that the performance of a binary classifier can be significantly improved by the processing of structured unlabeled data, i.e. data are structured if knowing the label of one example restricts the labeling of the others. We propose a novel paradigm for training a binary classifier from labeled and unlabeled examples that we call P-N learning. The learning process is guided by positive (P) and negative (N) constraints which restrict the labeling of the unlabeled set. P-N learning evaluates the classifier on the unlabeled data, identifies examples that have been classified in contradiction with structural constraints and augments the training set with the corrected samples in an iterative process. We propose a theory that formulates the conditions under which P-N learning guarantees improvement of the initial classifier and validate it on synthetic and real data. P-N learning is applied to the problem of on-line learning of object detector during tracking. We show that an accurate object detector can be learned from a single example and an unlabeled video sequence where the object may occur. The algorithm is compared with related approaches and state-of-the-art is achieved on a variety of objects (faces, pedestrians, cars, motorbikes and animals).",
"title": ""
},
{
"docid": "62938eb6d3b523affbe0b7eb72b423ca",
"text": "Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA . This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.",
"title": ""
},
{
"docid": "b825426604420620e1bba43c0f45115e",
"text": "Taxonomies are the backbone of many structured, semantic knowledge resources. Recent works for extracting taxonomic relations from text focused on collecting lexical-syntactic patterns to extract the taxonomic relations by matching the patterns to text. These approaches, however, often show low coverage due to the lack of contextual analysis across sentences. To address this issue, we propose a novel approach that collectively utilizes contextual information of terms in syntactic structures such that if the set of contexts of a term includes most of contexts of another term, a subsumption relation between the two terms is inferred. We apply this method to the task of taxonomy construction from scratch, where we introduce another novel graph-based algorithm for taxonomic structure induction. Our experiment results show that the proposed method is well complementary with previous methods of linguistic pattern matching and significantly improves recall and thus F-measure.",
"title": ""
},
{
"docid": "79ab8ce5cb15b8eafd898edc8eb228aa",
"text": "OBJECTIVES\nGlycyrrhizin is the main water-soluble constituent of the root of liquorice (Glycyrrhiza glabra). The study investigates the effect of glycyrrhizin on streptozotocin (STZ)-induced diabetic changes and associated oxidative stress, including haemoglobin-induced free iron-mediated oxidative reactions.\n\n\nMETHODS\nMale Wistar rats were grouped as normal control, STZ-induced diabetic control, normal treated with glycyrrhizin, diabetic treated with glycyrrhizin and diabetic treated with a standard anti-hyperglycaemic drug, glibenclamide. Different parameters were studied in blood and tissue samples of the rats.\n\n\nKEY FINDINGS\nGlycyrrhizin treatment improved significantly the diabetogenic effects of STZ, namely enhanced blood glucose level, glucose intolerant behaviour, decreased serum insulin level including pancreatic islet cell numbers, increased glycohaemoglobin level and enhanced levels of cholesterol and triglyceride. The treatment significantly reduced diabetes-induced abnormalities of pancreas and kidney tissues. Oxidative stress parameters, namely, serum superoxide dismutase, catalase, malondialdehyde and fructosamine in diabetic rats were reverted to respective normal values after glycyrrhizin administration. Free iron in haemoglobin, iron-mediated free radical reactions and carbonyl formation in haemoglobin were pronounced in diabetes, and were counteracted by glycyrrhizin. Effects of glycyrrhizin and glibenclamide treatments appeared comparable.\n\n\nCONCLUSION\nGlycyrrhizin is quite effective against hyperglycaemia, hyperlipidaemia and associated oxidative stress, and may be a potential therapeutic agent for diabetes treatment.",
"title": ""
},
{
"docid": "3e605aff5b2ceae91ee0cef42dd36528",
"text": "A new super-concentrated aqueous electrolyte is proposed by introducing a second lithium salt. The resultant ultra-high concentration of 28 m led to more effective formation of a protective interphase on the anode along with further suppression of water activities at both anode and cathode surfaces. The improved electrochemical stability allows the use of TiO2 as the anode material, and a 2.5 V aqueous Li-ion cell based on LiMn2 O4 and carbon-coated TiO2 delivered the unprecedented energy density of 100 Wh kg(-1) for rechargeable aqueous Li-ion cells, along with excellent cycling stability and high coulombic efficiency. It has been demonstrated that the introduction of a second salts into the \"water-in-salt\" electrolyte further pushed the energy densities of aqueous Li-ion cells closer to those of the state-of-the-art Li-ion batteries.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "ea0952674e4fbf5e5c5d3738cc4a6ae1",
"text": "Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively thought time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms and the almost exclusive focus on forgetting motivate us to propose a more comprehensive set of implementation independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real AI systems that learn continually: accuracy or performance over time, backward and forward knowledge transfer, memory overhead as well as computational efficiency. Drawing inspiration from the standard Multi-Attribute Value Theory (MAVT) we further propose to fuse these metrics into a single score for ranking purposes and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.",
"title": ""
},
{
"docid": "d388e381e918ba764b4c1805fa7551fc",
"text": "In this paper a new protection scheme for DC traction supply system is introduced, which is known as “overload protection method”. In this scheme, with the knowledge of the number of traveling trains between two traction substations and the value of current which is drawn from each substation, the occurrence of short circuit fault is detected. The aforementioned data can be extracted and transmitted from railway traffic control system. Recently DDL (“Détection Défaut Lign in French”, which means “Line Fault detection”) protection method is used in supply line protection. In this paper, the electrical system of railway system is simulated using the data obtained from Tabriz (located in Iran) Urban Railway Organization. The performance of the conventional and proposed protection schemes is compared and the simulation results are presented and then the practical measures and requirements of both methods are investigated. According to the results obtained, both methods accomplish satisfactory protection performance; however the DDL protection scheme is severely sensitive to the change in components and supply system parameters and it is also hard to determine the setting range of protection parameters of this method. Therefore, it may lead to some undesired operations; while the proposed protection scheme is simpler and more reliable.",
"title": ""
},
{
"docid": "04ed69959c28c3c4185d3af55521d864",
"text": "A new differential-fed broadband antenna element with unidirectional radiation is proposed. This antenna is composed of a folded bowtie, a center-fed loop, and a box-shaped reflector. A pair of differential feeds is developed to excite the antenna and provide an ultrawideband (UWB) impedance matching. The box-shaped reflector is used for the reduction of the gain fluctuation across the operating frequency band. An antenna prototype for UWB applications is fabricated and measured, exhibiting an impedance bandwidth of 132% with standing wave ratio ≤ 2 from 2.48 to 12.12 GHz, over which the gain varies between 7.2 and 14.1 dBi at boresight. The proposed antenna radiates unidirectionally with low cross polarization and low back radiation. Furthermore, the time-domain characteristic of the proposed antenna is evaluated. In addition, a 2 × 2 element array using the proposed element is also investigated in this communication.",
"title": ""
},
{
"docid": "48427804f2e704ab6ea15251c624cdf2",
"text": "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"title": ""
},
{
"docid": "a2346bc58039ef6f5eb710804e87359d",
"text": "This work presents a deep object co-segmentation (DOCS) approach for segmenting common objects of the same class within a pair of images. This means that the method learns to ignore common, or uncommon, background stuff and focuses on common objects. If multiple object classes are presented in the image pair, they are jointly extracted as foreground. To address this task, we propose a CNN-based Siamese encoder-decoder architecture. The encoder extracts high-level semantic features of the foreground objects, a mutual correlation layer detects the common objects, and finally, the decoder generates the output foreground masks for each image. To train our model, we compile a large object co-segmentation dataset consisting of image pairs from the PASCAL dataset with common objects masks. We evaluate our approach on commonly used datasets for co-segmentation tasks and observe that our approach consistently outperforms competing methods, for both seen and unseen object classes.",
"title": ""
},
{
"docid": "f329009bbee172c495a441a0ab911e28",
"text": "This paper provides an application of game theoretic techniques to the analysis of a class of multiparty cryptographic protocols for secret bit exchange.",
"title": ""
},
{
"docid": "9c16bf2fb7ceba2bf872ca3d1475c6d9",
"text": "Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically zero-th (max) or the first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics.Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes that when combined with hand-crafted features (as is standard practice) achieves state-of-the-art accuracy.",
"title": ""
},
{
"docid": "53e17cdc263fcb68abc3b25bb51d1888",
"text": "This paper describes a qualitative approach to analysing students’ concept maps. The classi cation highlights three major patterns which are referred to as ‘spoke’, ‘chain’ and ‘net’ structures. Examples are given from Year 8 science classes. The patterns are interpreted as being indicators of progressive levels of understanding. It is proposed that identi cation of these differences may help the classroom teacher to focus teaching for more effective learning and may be used as a basis for structuring groups in collaborative settings. This approach to analysing concept maps is of value because it suggests teaching approaches that help students integrate new knowledge and build upon their existing naïve concepts. We also refer to the teacher’s scheme of work and to the National Curriculum for science in order to consider their in uence in the construction of understanding. These ideas have been deliberately offered for early publication to encourage debate and generate feedback. Further work is in progress to better understand how students with different conceptual structures can be most appropriately helped to achieve learning development.",
"title": ""
},
{
"docid": "242b854de904075d04e7044e680dc281",
"text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.",
"title": ""
},
{
"docid": "70e8ac3c9d948310bd02746d56090ed0",
"text": "Much recent attention, both experimental and theoretical, has been focussed on classii-cation algorithms which produce voted combinations of classiiers. Recent theoretical work has shown that the impressive generalization performance of algorithms like AdaBoost can be attributed to the classiier having large margins on the training data. We present abstract algorithms for nding linear and convex combinations of functions that minimize arbitrary cost functionals (i.e functionals that do not necessarily depend on the margin). Many existing voting methods can be shown to be special cases of these abstract algorithms. Then, following previous theoretical results bounding the generalization performance of convex combinations of classiiers in terms of general cost functions of the margin, we present a new algorithm (DOOM II) for performing a gradient descent optimization of such cost functions. Experiments on several data sets from the UC Irvine repository demonstrate that DOOM II generally outperforms AdaBoost, especially in high noise situations. Margin distribution plots verify that DOOM II is willing tògive up' on examples that are too hard in order to avoid overrtting. We also show that the overrtting behavior exhibited by AdaBoost can be quantiied in terms of our proposed cost function.",
"title": ""
},
{
"docid": "87a10975f7020c6c52530a7285bc28ec",
"text": "We investigated the influence of sex and puberty stage on circadian urine production and levels of antidiuretic hormone [arginine vasopressin (AVP)] in healthy children. Thirty-nine volunteers (9 prepuberty boys, 10 prepuberty girls, 10 midpuberty boys, and 10 midpuberty girls) were included. All participants underwent a 24-h circadian inpatient study under standardized conditions regarding Na(+) and fluid intake. Blood samples were drawn every 4 h for measurements of plasma AVP, serum 17-β-estradiol, and testosterone, and urine was fractionally collected for measurements of electrolytes, aquaporin (AQP)2, and PGE2. We found a marked nighttime decrease in diuresis (from 1.69 ± 0.08 to 0.86 ± 0.06 ml·kg(-1)·h(-1), P < 0.001) caused by a significant nighttime increase in solute-free water reabsorption (TcH2O; day-to-night ratio: 0.64 ± 0.07, P < 0.001) concurrent with a significant decrease in osmotic excretion (day-to-night ratio: 1.23 ± 0.06, P < 0.001). Plasma AVP expressed a circadian rhythm (P < 0.01) with a nighttime increase and peak levels at midnight (0.49 ± 0.05 pg/ml). The circadian plasma AVP rhythm was not influenced by sex (P = 0.56) or puberty stage (P = 0.73). There was significantly higher nighttime TcH2O in prepuberty children. This concurred with increased nighttime urinary AQP2 excretion in prepuberty children. Urinary PGE2 exhibited a circadian rhythm independent of sex or puberty stage. Levels of serum 17β-estradiol and testosterone were as expected for sex and puberty stage, and no effect on the AVP-AQP2-TcH2O axis was observed. This study found a circadian rhythm of plasma AVP independent of sex and puberty stage, although nighttime TcH2O was higher and AQP2 excretion was more pronounced in prepuberty children, suggesting higher prepuberty renal AVP sensitivity.",
"title": ""
}
] |
scidocsrr
|
f74d48bb9ef4384c1fa52832509397f9
|
A CMOS Low-Dropout Regulator With Dominant-Pole Substitution
|
[
{
"docid": "39d943b04780ea83744058ed154d088a",
"text": "This paper provides a detailed analysis of the power-supply rejection ratio (PSRR) of low-dropout (LDO) regulator. The paper includes circuit modeling of a generic LDO with signal injection at its supply. Based on the modeling, the transfer function of PSRR is derived. Thorough analysis of the locations of poles and zeros obtained from the transfer function is carried out, and then recommendations to improve PSRR are given. The proposed model and the achieved results are verified by circuit simulations using BSEVI models of a commercial CMOS 0.35-μm technology. The results reveal good agreement between the modeling and the PSRR property of a LDO.",
"title": ""
}
] |
[
{
"docid": "5ecf0983a9a415d9be9a7f9a2fbc534f",
"text": "We derive in the present work topological photonic states purely based on conventional dielectric material by deforming a honeycomb lattice of cylinders into a triangular lattice of cylinder hexagons. The photonic topology is associated with a pseudo-time-reversal (TR) symmetry constituted by the TR symmetry supported in general by Maxwell equations and the C_{6} crystal symmetry upon design, which renders the Kramers doubling in the present photonic system. It is shown explicitly for the transverse magnetic mode that the role of pseudospin is played by the angular momentum of the wave function of the out-of-plane electric field. We solve Maxwell equations and demonstrate the new photonic topology by revealing pseudospin-resolved Berry curvatures of photonic bands and helical edge states characterized by Poynting vectors.",
"title": ""
},
{
"docid": "15c0f63bb4ab47e47d2bb9789cf404f4",
"text": "This review provides an account of the Study of Mathematically Precocious Youth (SMPY) after 35 years of longitudinal research. Findings from recent 20-year follow-ups from three cohorts, plus 5- or 10-year findings from all five SMPY cohorts (totaling more than 5,000 participants), are presented. SMPY has devoted particular attention to uncovering personal antecedents necessary for the development of exceptional math-science careers and to developing educational interventions to facilitate learning among intellectually precocious youth. Along with mathematical gifts, high levels of spatial ability, investigative interests, and theoretical values form a particularly promising aptitude complex indicative of potential for developing scientific expertise and of sustained commitment to scientific pursuits. Special educational opportunities, however, can markedly enhance the development of talent. Moreover, extraordinary scientific accomplishments require extraordinary commitment both in and outside of school. The theory of work adjustment (TWA) is useful in conceptualizing talent identification and development and bridging interconnections among educational, counseling, and industrial psychology. The lens of TWA can clarify how some sex differences emerge in educational settings and the world of work. For example, in the SMPY cohorts, although more mathematically precocious males than females entered math-science careers, this does not necessarily imply a loss of talent because the women secured similar proportions of advanced degrees and high-level careers in areas more correspondent with the multidimensionality of their ability-preference pattern (e.g., administration, law, medicine, and the social sciences). By their mid-30s, the men and women appeared to be happy with their life choices and viewed themselves as equally successful (and objective measures support these subjective impressions). Given the ever-increasing importance of quantitative and scientific reasoning skills in modern cultures, when mathematically gifted individuals choose to pursue careers outside engineering and the physical sciences, it should be seen as a contribution to society, not a loss of talent.",
"title": ""
},
{
"docid": "666d71b6f6646ee395c996e011b09993",
"text": "Motivated by the limitations of existing multi-view stereo benchmarks, we present a novel dataset for this task. Towards this goal, we recorded a variety of indoor and outdoor scenes using a high-precision laser scanner and captured both high-resolution DSLR imagery as well as synchronized low-resolution stereo videos with varying fields-of-view. To align the images with the laser scans, we propose a robust technique which minimizes photometric errors conditioned on the geometry. In contrast to previous datasets, our benchmark provides novel challenges and covers a diverse set of viewpoints and scene types, ranging from natural scenes to man-made indoor and outdoor environments. Furthermore, we provide data at significantly higher temporal and spatial resolution. Our benchmark is the first to cover the important use case of hand-held mobile devices while also providing high-resolution DSLR camera images. We make our datasets and an online evaluation server available at http://www.eth3d.net.",
"title": ""
},
{
"docid": "e2c6437d257559211d182b5707aca1a4",
"text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.",
"title": ""
},
{
"docid": "3a066516f52dec6150fcf4a8e081605f",
"text": "Writer: Julie Risbourg Title: Breaking the ‘glass ceiling’ Subtitle: Language: A Critical Discourse Analysis of how powerful businesswomen are portrayed in The Economist online English Pages: 52 Women still represent a minority in the executive world. Much research has been aimed at finding possible explanations concerning the underrepresentation of women in the male dominated executive sphere. The findings commonly suggest that a patriarchal society and the maintenance of gender stereotypes lead to inequalities and become obstacles for women to break the so-called ‘glass ceiling’. This thesis, however, aims to explore how businesswomen are represented once they have broken the glass ceiling and entered the executive world. Within the Forbes’ list of the 100 most powerful women of 2017, the two first businesswomen on the list were chosen, and their portrayals were analysed through articles published by The Economist online. The theoretical framework of this thesis includes Goffman’s framing theory and takes a cultural feminist perspective on exploring how the media outlet frames businesswomen Sheryl Sandberg and Mary Barra. The thesis also examines how these frames relate to the concepts of stereotyping, commonly used in the coverage of women in the media. More specifically, the study investigates whether negative stereotypes concerning their gender are present in the texts or if positive stereotypes such as idealisation are used to portray them. Those concepts are coupled with the theoretical aspect of the method, which is Critical Discourse Analysis. This method is chosen in order to explore the underlying meanings and messages The Economist chose to refer to these two businesswomen. This is done through the use of linguistic and visual tools, such as lexical choices, word connotations, nomination/functionalisation and gaze. The findings show that they were portrayed positively within a professional environment, and the publication celebrated their success and hard work. Moreover, the results also show that gender related traits were mentioned, showing a subjective representation, which is countered by their idealisation, via their presence in not only the executive world, but also having such high-working titles in male dominated industries.",
"title": ""
},
{
"docid": "f6446f5853ea6cb1ad3705c23b96edae",
"text": "Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the front-haul network between the BBUs and RRHs, is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We then propose FluidNet - a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable front-haul to apply appropriate transmission strategies in different parts of the network and hence cater effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN, while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a 6 BBU, 6 RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its front-haul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline transmission schemes.",
"title": ""
},
{
"docid": "0ea6d4a02a4013a0f9d5aa7d27b5a674",
"text": "Recently, there has been growing interest in social network analysis. Graph models for social network analysis are usually assumed to be a deterministic graph with fixed weights for its edges or nodes. As activities of users in online social networks are changed with time, however, this assumption is too restrictive because of uncertainty, unpredictability and the time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in loss of much of the information about the behavior of the network contained in its time-varying edge weights of network, such that is not an appropriate measure or sample for unveiling the important natural properties of the original network embedded in the varying edge weights. In this paper, we suggest that using stochastic graphs, in which weights associated with the edges are random variables, can be a suitable model for complex social network. Once the network model is chosen to be stochastic graphs, every aspect of the network such as path, clique, spanning tree, network measures and sampling algorithms should be treated stochastically. In particular, the network measures should be reformulated and new network sampling algorithms must be designed to reflect the stochastic nature of the network. In this paper, we first define some network measures for stochastic graphs, and then we propose four sampling algorithms based on learning automata for stochastic graphs. In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. The performances of these algorithms are studied in terms of Kolmogorov-Smirnov D statistics, relative error, Kendall’s rank correlation coefficient and relative cost.",
"title": ""
},
{
"docid": "19fe7a55a8ad6f206efc27ef7ff16324",
"text": "Vehicular adhoc networks (VANETs) are relegated as a subgroup of Mobile adhoc networks (MANETs), with the incorporation of its principles. In VANET the moving nodes are vehicles which are self-administrated, not bounded and are free to move and organize themselves in the network. VANET possess the potential of improving safety on roads by broadcasting information associated with the road conditions. This results in generation of the redundant information been disseminated by vehicles. Thus bandwidth issue becomes a major concern. In this paper, Location based data aggregation technique is been proposed for aggregating congestion related data from the road areas through which vehicles travelled. It also takes into account scheduling mechanism at the road side units (RSUs) for treating individual vehicles arriving in its range on the basis of first-cum-first order. The basic idea behind this work is to effectually disseminate the aggregation information related to congestion to RSUs as well as to the vehicles in the network. The Simulation results show that the proposed technique performs well with the network load evaluation parameters.",
"title": ""
},
{
"docid": "80b173cf8dbd0bc31ba8789298bab0fa",
"text": "This paper presents a novel statistical method for factor analysis of binary and count data which is closely related to a technique known as Latent Semantic Analysis. In contrast to the latter method which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed technique uses a generative latent class model to perform a probabilistic mixture decomposition. This results in a more principled approach with a solid foundation in statistical inference. More precisely, we propose to make use of a temperature controlled version of the Expectation Maximization algorithm for model fitting, which has shown excellent performance in practice. Probabilistic Latent Semantic Analysis has many applications, most prominently in information retrieval, natural language processing, machine learning from text, and in related areas. The paper presents perplexity results for different types of text and linguistic data collections and discusses an application in automated document indexing. The experiments indicate substantial and consistent improvements of the probabilistic method over standard Latent Semantic Analysis.",
"title": ""
},
{
"docid": "49f0d1d748d1fbfb289d6af8451c16a5",
"text": "Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today’s researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.",
"title": ""
},
{
"docid": "9cb832657be4d4d80682c1a49249a319",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: [email protected] This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "42b9ba3cf10ff879799ae0a4272e68fa",
"text": "This article argues that ( a ) ego, or self, is an organization of knowledge, ( b ) ego is characterized by cognitive biases strikingly analogous to totalitarian information-control strategies, and ( c ) these totalitarian-ego biases junction to preserve organization in cognitive structures. Ego's cognitive biases are egocentricity (self as the focus of knowledge), \"beneffectance\" (perception of responsibility for desired, but not undesired, outcomes), and cognitive conservatism (resistance to cognitive change). In addition to being pervasively evident in recent studies of normal human cognition, these three biases are found in actively functioning, higher level organizations of knowledge, perhaps best exemplified by theoretical paradigms in science. The thesis that egocentricity, beneffectance, and conservatism act to preserve knowledge organizations leads to the proposal of an intrapsychic analog of genetic evolution, which in turn provides an alternative to prevalent motivational and informational interpretations of cognitive biases. The ego rejects the unbearable idea together with its associated affect and behaves as if the idea had never occurred to the person a t all. (Freud, 1894/1959, p. 72) Alike with the individual and the group, the past is being continually re-made, reconstructed in the interests of the present. (Bartlett, 1932, p. 309) As historians of our own lives we seem to be, on the one hand, very inattentive and, on the other, revisionists who will justify the present by changing the past. (Wixon & Laird, 1976, p. 384) \"Who controls the past,\" ran the Party slogan, \"controls the future: who controls the present controls the past.\" (Orwell, 1949, p. 32) totalitarian, was chosen only with substantial reservation because of this label's pejorative connotations. Interestingly, characteristics that seem undesirable in a political system can nonetheless serve adaptively in a personal organization of knowledge. The conception of ego as an organization of knowledge synthesizes influences from three sources --empirical, literary, and theoretical. First, recent empirical demonstrations of self-relevant cognitive biases suggest that the biases play a role in some fundamental aspect of personality. Second, George Orwell's 1984 suggests the analogy between ego's biases and totalitarian information con&ol. Last, the theories of Loevinger (1976) and Epstein ( 1973 ) suggest the additional analogy between ego's organization and theoretical organizations of scientific knowledge. The first part of this article surveys evidence indicating that ego's cognitive biases are pervasive in and characteristic of normal personalities. The second part sets forth arguments for interpreting the biases as manifestations of an effectively functioning organization of knowledge. The last section develops an explanation for the totalitarian-ego biases by analyzing their role in maintaining cognitive organization and in supporting effective behavior. I . Three Cognitive Biases: Fabrication and Revision of Personal History Ego, as an organization of knowledge (a. conclusion to be developed later), serves the functions of What follows is a portrait of self (or ego-the terms observing (perceiving) and recording (rememberare used interchangeably) constructed by intering) personal experience; it can be characterized, weaving strands drawn from several areas of recent therefore, as a perssnal historian. Many findings research. 
The most striking features of the portrait are three cognitive biases, which correspond disturbingly to thought control and propaganda devices Acknowledgments are given at the end of the article. Requests for reprints should be sent to Anthony G. that are to be defining characteristics of Greenwald, Department of Psychology, Ohio State Univera totalitarian political system. The epithet for ego, sity, 404C West 17th Avenue, Columbus, Ohio 43210. Copyright 1980 by the American Psychological Association, Inc. 0003466X/80/3S07-0603$00.75 from recent research in personality, cognitive, and social psychology demonstrate that ego fabricates and revises history, thereby engaging in practices not ordinarily admired in historians. These lapses in personal scholarship, or cognitive biases, are discussed below in three categories: egocentricity (self perceived as more central to events than it is), \"beneffectance\" l (self perceived as selectively responsible for desired, but not undesired, outcomes), and conservatism (resistance to cognitive",
"title": ""
},
{
"docid": "713c7761ecba317bdcac451fcc60e13d",
"text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.",
"title": ""
},
{
"docid": "0508c5927df12694c665cc8c7b72d6cb",
"text": "Fingerprint analysts, firearms and toolmark examiners, and forensic odontologists often rely on the uniqueness proposition in order to support their theory of identification. However, much of the literature claiming to have proven uniqueness in the forensic identification sciences is methodologically weak, and suffers flaws that negate any such conclusion being drawn. The finding of uniqueness in any study appears to be an overstatement of the significance of its results, and in several instances, this claim is made despite contrary data being presented. The mathematical and philosophical viewpoint regarding this topic is that obtaining definitive proof of uniqueness is considered impossible by modern scientific methods. More importantly, there appears to be no logical reason to pursue such research, as commentators have established that uniqueness is not the essential requirement for forming forensic conclusions. The courts have also accepted this in several recent cases in the United States, and have dismissed the concept of uniqueness as irrelevant to the more fundamental question of the reliability of the forensic analysis.",
"title": ""
},
{
"docid": "bfdbc3814d517df9859294bd53885aa2",
"text": "The Internet of Things (IoT) is the next big wave in computing characterized by large scale open ended heterogeneous network of things, with varying sensing, actuating, computing and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for IoT to manage and optimize itself. In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations to autonomously optimize the QoS and other quality attributes of IoT applications. Our experiments demonstrate that our proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.",
"title": ""
},
{
"docid": "5d43fb2589a49de5f4f0205de79ad75c",
"text": "Many vision applications require high-accuracy dense disparity maps in real-time and online. Due to time constraint, most real-time stereo applications rely on local winner-takes-all optimization in the disparity computation process. These local approaches are generally outperformed by offline global optimization based algorithms. However, recent research shows that, through carefully selecting and aggregating the matching costs of neighboring pixels, the disparity maps produced by a local approach can be more accurate than those generated by many global optimization techniques. We are therefore motivated to investigate whether these cost aggregation approaches can be adopted in real-time stereo applications and, if so, how well they perform under the real-time constraint. The evaluation is conducted on a real-time stereo platform, which utilizes the processing power of programmable graphics hardware. Six recent cost aggregation approaches are implemented and optimized for graphics hardware so that real-time speed can be achieved. The performances of these aggregation approaches in terms of both processing speed and result quality are reported.",
"title": ""
},
{
"docid": "afbdc8a6d4db75d2025528ce0583b47f",
"text": "Ground truth annotation of the occurrence and intensity of FACS Action Unit (AU) activation requires great amount of attention. The efforts towards achieving a common platform for AU evaluation have been addressed in the FG 2015 Facial Expression Recognition and Analysis challenge (FERA 2015). Participants are invited to estimate AU occurrence and intensity on a common benchmark dataset. Conventional approaches towards achieving automated methods are to train multiclass classifiers or to use regression models. In this paper, we propose a novel application of a deep convolutional neural network (CNN) to recognize AUs as part of FERA 2015 challenge. The 7 layer network is composed of 3 convolutional layers and a max-pooling layer. The final fully connected layers provide the classification output. For the selected tasks of the challenge, we have trained two different networks for the two different datasets, where one focuses on the AU occurrences and the other on both occurrences and intensities of the AUs. The occurrence and intensity of AU activation are estimated using specific neuron activations of the output layer. This way, we are able to create a single network architecture that could simultaneously be trained to produce binary and continuous classification output.",
"title": ""
},
{
"docid": "f1166b493020d5c1f54fca517662eb40",
"text": "It is important for researchers to efficiently conduct quality literature studies. Hence, a structured and efficient approach is essential. We overview work that has demonstrated the potential for using software tools in literature reviews. We highlight the untapped opportunities in using an end-to-end tool-supported literature review methodology. Qualitative data-analysis tools such as NVivo are immensely useful as a means to analyze, synthesize, and write up literature reviews. In this paper, we describe how to organize and prepare papers for analysis and provide detailed guidelines for actually coding and analyzing papers, including detailed illustrative strategies to effectively write up and present the results. We present a detailed case study as an illustrative example of the proposed approach put into practice. We discuss the means, value, and also pitfalls of applying tool-supported literature review approaches. We contribute to the literature by proposing a four-phased tool-supported methodology that serves as best practice in conducting literature reviews in IS. By viewing the literature review process as a qualitative study and treating the literature as the “data set”, we address the complex puzzle of how best to extract relevant literature and justify its scope, relevance, and quality. We provide systematic guidelines for novice IS researchers seeking to conduct a robust literature review.",
"title": ""
},
{
"docid": "11333e88e8ff98422bdbf7d7846e9807",
"text": "As a fundamental task, document similarity measure has broad impact to document-based classification, clustering and ranking. Traditional approaches represent documents as bag-of-words and compute document similarities using measures like cosine, Jaccard, and dice. However, entity phrases rather than single words in documents can be critical for evaluating document relatedness. Moreover, types of entities and links between entities/words are also informative. We propose a method to represent a document as a typed heterogeneous information network (HIN), where the entities and relations are annotated with types. Multiple documents can be linked by the words and entities in the HIN. Consequently, we convert the document similarity problem to a graph distance problem. Intuitively, there could be multiple paths between a pair of documents. We propose to use the meta-path defined in HIN to compute distance between documents. Instead of burdening user to define meaningful meta paths, an automatic method is proposed to rank the meta-paths. Given the meta-paths associated with ranking scores, an HIN-based similarity measure, KnowSim, is proposed to compute document similarities. Using Freebase, a well-known world knowledge base, to conduct semantic parsing and construct HIN for documents, our experiments on 20Newsgroups and RCV1 datasets show that KnowSim generates impressive high-quality document clustering.",
"title": ""
},
{
"docid": "58b825902e652cc2ae0bfd867bd4f5d9",
"text": "Considers present and future practical applications of cross-reality. From tools to build new 3D virtual worlds to the products of those tools, cross-reality is becoming a staple of our everyday reality. Practical applications of cross-reality include the ability to virtually visit a factory to manage and maintain resources from the comfort of your laptop or desktop PC as well as sentient visors that augment reality with additional information so that users can make more informed choices. Tools and projects considered are:Project Wonderland for multiuser mixed reality;ClearWorlds: mixed- reality presence through virtual clearboards; VICI (Visualization of Immersive and Contextual Information) for ubiquitous augmented reality based on a tangible user interface; Mirror World Chocolate Factory; and sentient visors for browsing the world.",
"title": ""
}
] |
scidocsrr
|
002fba58f96c79a98229f37567fa4363
|
Pretty as a Princess: Longitudinal Effects of Engagement With Disney Princesses on Gender Stereotypes, Body Esteem, and Prosocial Behavior in Children.
|
[
{
"docid": "b4dcc5c36c86f9b1fef32839d3a1484d",
"text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
}
] |
[
{
"docid": "761be34401cc6ef1d8eea56465effca9",
"text": "Résumé: Dans cet article, nous proposons une nouvelle approche pour le résumé automatique de textes utilisant un algorithme d'apprentissage numérique spécifique à la tâche d'ordonnancement. L'objectif est d'extraire les phrases d'un document qui sont les plus représentatives de son contenu. Pour se faire, chaque phrase d'un document est représentée par un vecteur de scores de pertinence, où chaque score est un score de similarité entre une requête particulière et la phrase considérée. L'algorithme d'ordonnancement effectue alors une combinaison linéaire de ces scores, avec pour but d'affecter aux phrases pertinentes d'un document des scores supérieurs à ceux des phrases non pertinentes du même document. Les algorithmes d'ordonnancement ont montré leur efficacité en particulier dans le domaine de la méta-recherche, et leur utilisation pour le résumé est motivée par une analogie peut être faite entre la méta-recherche et le résumé automatique qui consiste, dans notre cas, à considérer les similarités des phrases avec les différentes requêtes comme étant des sorties de différents moteurs de recherche. Nous montrons empiriquement que l'algorithme d'ordonnancement a de meilleures performances qu'une approche utilisant un algorithme de classification sur deux corpus distincts.",
"title": ""
},
{
"docid": "c7daf28d656a9e51e5a738e70beeadcf",
"text": "We present a taxonomy for Information Visualization (IV) that characterizes it in terms of data, task, skill and context, as well as a number of dimensions that relate to the input and output hardware, the software tools, as well as user interactions and human perceptual abil ities. We il lustrate the utilit y of the taxonomy by focusing particularly on the information retrieval task and the importance of taking into account human perceptual capabiliti es and limitations. Although the relevance of Psychology to IV is often recognised, we have seen relatively littl e translation of psychological results and theory to practical IV applications. This paper targets the better development of information visualizations through the introduction of a framework delineating the major factors in interface development. We believe that higher quality visualizations will result from structured developments that take into account these considerations and that the framework will also serve to assist the development of effective evaluation and assessment processes.",
"title": ""
},
{
"docid": "a76826da7f077cf41aaa7c8eca9be3fe",
"text": "In this paper we present an open-source design for the development of low-complexity, anthropomorphic, underactuated robot hands with a selectively lockable differential mechanism. The differential mechanism used is a variation of the whiffletree (or seesaw) mechanism, which introduces a set of locking buttons that can block the motion of each finger. The proposed design is unique since with a single motor and the proposed differential mechanism the user is able to control each finger independently and switch between different grasping postures in an intuitive manner. Anthropomorphism of robot structure and motion is achieved by employing in the design process an index of anthropomorphism. The proposed robot hands can be easily fabricated using low-cost, off-the-shelf materials and rapid prototyping techniques. The efficacy of the proposed design is validated through different experimental paradigms involving grasping of everyday life objects and execution of daily life activities. The proposed hands can be used as affordable prostheses, helping amputees regain their lost dexterity.",
"title": ""
},
{
"docid": "5a2649736269f7be88886c2a45243492",
"text": "Modern computer displays tend to be in fixed size, rigid, and rectilinear rendering them insensitive to the visual area demands of an application or the desires of the user. Foldable displays offer the ability to reshape and resize the interactive surface at our convenience and even permit us to carry a very large display surface in a small volume. In this paper, we implement four interactive foldable display designs using image projection with low-cost tracking and explore display behaviors using orientation sensitivity.",
"title": ""
},
{
"docid": "7f0dd680faf446e74aff177dc97b5268",
"text": "Vehicle Ad-Hoc Networks (VANET) enable all components in intelligent transportation systems to be connected so as to improve transport safety, relieve traffic congestion, reduce air pollution, and enhance driving comfort. The vision of all vehicles connected poses a significant challenge to the collection, storage, and analysis of big traffic-related data. Vehicular cloud computing, which incorporates cloud computing into vehicular networks, emerges as a promising solution. Different from conventional cloud computing platform, the vehicle mobility poses new challenges to the allocation and management of cloud resources in roadside cloudlet. In this paper, we study a virtual machine (VM) migration problem in roadside cloudletbased vehicular network and unfold that (1) whether a VM shall be migrated or not along with the vehicle moving and (2) where a VM shall be migrated, in order to minimize the overall network cost for both VM migration and normal data traffic. We first treat the problem as a static off-line VM placement problem and formulate it into a mixed-integer quadratic programming problem. A heuristic algorithm with polynomial time is then proposed to tackle the complexity of solving mixed-integer quadratic programming. Extensive simulation results show that it produces near-optimal performance and outperforms other related algorithms significantly. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "367c6ce6d83baff7de78e9d128123ce8",
"text": "Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server, while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without dependence on the distributed file system. We implement a prototype system and conduct experiments using real world product applications. Evaluation results reveal that compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total duration of service handoff time by 80%(56%) with network bandwidth 5Mbps(20Mbps).",
"title": ""
},
{
"docid": "20af5209de71897158820f935018d877",
"text": "This paper presents a new bag-of-entities representation for document ranking, with the help of modern knowledge bases and automatic entity linking. Our system represents query and documents by bag-of-entities vectors constructed from their entity annotations, and ranks documents by their matches with the query in the entity space. Our experiments with Freebase on TREC Web Track datasets demonstrate that current entity linking systems can provide sufficient coverage of the general domain search task, and that bag-of-entities representations outperform bag-of-words by as much as 18% in standard document ranking tasks.",
"title": ""
},
{
"docid": "ee9bccbfecd58151569449911c624221",
"text": "Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.",
"title": ""
},
{
"docid": "7cfeadc550f412bb92df4f265bf99de0",
"text": "AIM\nCorrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level became recently commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings.\n\n\nMETHODS\nA specially designed resolution phantom containing three (99m)Tc lines sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electrics Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a (99m)Tc-water solution. The projection data were reconstructed using the GE's Evolution for Bone(®), Philips Astonish(®) and Siemens Flash3D(®) software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed theses recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme.\n\n\nRESULTS\nThe best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5mm (GE), from 9.1 to 6.4mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves for GE from 147 to 189, from 179. to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but deteriorates significantly the spatial resolution.\n\n\nCONCLUSIONS\nUsing advanced reconstruction algorithms the largest improvement in image resolution and contrast is found for the scatter corrected slices without applying post-filtering. The user has to choose whether noise reduction by post-filtering or improved image resolution fits better a particular imaging procedure.",
"title": ""
},
{
"docid": "5545d32ccfd1459c8c7e918c8b324eb5",
"text": "Sequence generative adversarial networks SeqGAN have been used to improve conditional sequence generation tasks, for example, chit-chat dialogue generation. To stabilize the training of SeqGAN, Monte Carlo tree search MCTS or reward at every generation step REGS is used to evaluate the goodness of a generated subsequence. MCTS is computationally intensive, but the performance of REGS is worse than MCTS. In this paper, we propose stepwise GAN StepGAN, in which the discriminator is modified to automatically assign scores quantifying the goodness of each subsequence at every generation step. StepGAN has significantly less computational costs than MCTS. We demonstrate that StepGAN outperforms previous GAN-based methods on both synthetic experiment and chit-chat dialogue generation.",
"title": ""
},
{
"docid": "94640a4ad3b32a307658ca2028dbd589",
"text": "In this paper, we investigate the diversity aspect of paraphrase generation. Prior deep learning models employ either decoding methods or add random input noise for varying outputs. We propose a simple method Diverse Paraphrase Generation (D-PAGE), which extends neural machine translation (NMT) models to support the generation of diverse paraphrases with implicit rewriting patterns. Our experimental results on two real-world benchmark datasets demonstrate that our model generates at least one order of magnitude more diverse outputs than the baselines in terms of a new evaluation metric Jeffrey’s Divergence. We have also conducted extensive experiments to understand various properties of our model with a focus on diversity.",
"title": ""
},
{
"docid": "1608c56c79af07858527473b2b0262de",
"text": "The field weakening control strategy of interior permanent magnet synchronous motor for electric vehicles was studied in the paper. A field weakening control method based on gradient descent of voltage limit according to the ellipse and modified current setting were proposed. The field weakening region was determined by the angle between the constant torque direction and the voltage limited ellipse decreasing direction. The direction of voltage limited ellipse decreasing was calculated by using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region. A simulink model was also founded by Matlab/Simulink, and the validity of the proposed strategy was proved by the simulation results.",
"title": ""
},
{
"docid": "ec8847a65f015a52ce90bdd304103658",
"text": "This study has a purpose to investigate the adoption of online games technologies among adolescents and their behavior in playing online games. The findings showed that half of them had experience ten months or less in playing online games with ten hours or less for each time playing per week. Nearly fifty-four percent played up to five times each week where sixty-six percent played two hours or less. Behavioral Intention has significant correlation to model variables naming Perceived Enjoyment, Flow Experience, Performance Expectancy, Effort Expectancy, Social Influence, and Facilitating Conditions; Experience; and the number and duration of game sessions. The last, Performance Expectancy and Facilitating Condition had a positive, medium, and statistically direct effect on Behavioral Intention. Four other variables Perceived Enjoyment, Flow Experience, Effort Expectancy, and Social Influence had positive or negative, medium or small, and not statistically direct effect on Behavioral Intention. Additionally, Flow Experience and Social Influence have no significant different between the mean value for male and female. Other variables have significant different regard to gender, where mean value of male was significantly greater than female except for Age. Practical implications of this study are relevant to groups who have interest to enhance or to decrease the adoption of online games technologies. Those to enhance the adoption of online games technologies must: preserve Performance Expectancy and Facilitating Conditions; enhance Flow Experience, Perceived Enjoyment, Effort Expectancy, and Social Influence; and engage the adolescent's online games behavior, specifically supporting them in longer playing games and in enhancing their experience. The opposite actions to these proposed can be considered to decrease the adoption.",
"title": ""
},
{
"docid": "04eb3cb8f83277b552d9cb80d990cce0",
"text": "The growing momentum of the Internet of Things (IoT) has shown an increase in attack vectors within the security research community. We propose adapting a recent new approach of frequently changing IPv6 address assignment to add an additional layer of security to the Internet of Things. We examine implementing Moving Target IPv6 Defense (MT6D) in IPv6 over Low-Powered Wireless Personal Area Networks (6LoWPAN); a protocol that is being used in wireless sensors found in home automation systems and smart meters. 6LoWPAN allows the Internet of Things to extend into the world of wireless sensor networks. We propose adapting Moving-Target IPv6 Defense for use with 6LoWPAN in order to defend against network-side attacks such as Denial-of-Service and Man-In-The-Middle while maintaining anonymity of client-server communications. This research aims in providing a moving-target defense for wireless sensor networks while maintaining power efficiency within the network.",
"title": ""
},
{
"docid": "6ca68f39cd15b3e698d8df8c99e160a6",
"text": "This paper proposed a novel isolated bidirectional flyback converter integrated with two non-dissipative LC snubbers. In the proposed topology, the main flyback transformer and the LC snubbers are crossed-coupled to reduce current circulation and recycle the leakage energy. The proposed isolated bidirectional flyback converter can step-up the voltage of the battery (Vbat = 12V) to a high voltage side (VHV = 200V) for the load demand and vice versa. The main goal of this paper is to demonstrate the performances of this topology to achieve high voltage gain with less switching losses and reduce components stresses. The circuit analysis conferred in detail for Continuous Conduction Mode (CCM). Lastly, a laboratory prototype constructed to compare with simulation result.",
"title": ""
},
{
"docid": "611c8ce42410f8f678aa5cb5c0de535b",
"text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation using LM perplexity and a human evaluation, we demonstrate that the sequence-tosequence approaches outperform the LM-based method. We show correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.",
"title": ""
},
{
"docid": "69944e5a5a23abf66be23fe6a56d53cc",
"text": "A 71-76 GHz high dynamic range CMOS RF variable gain amplifier (VGA) is presented. Variable gain is achieved using two current-steering trans-conductance stages, which provide high linearity with relatively low power consumption. The circuit is fabricated in a MS/RF 90-nm CMOS technology and consumes 18-mA total current from a 2-V supply. This VGA achieves a 14-dB maximum gain, a 30-dB gain controlled range, and a 4-dBm output saturation power. To the authorpsilas knowledge, this VGA demonstrates the highest operation frequency among the reported CMOS VGAs.",
"title": ""
},
{
"docid": "bf1b556a1617674ca7b560aa48731f76",
"text": "The increasing complexity of configuring cellular networks suggests that machine learning (ML) can effectively improve 5G technologies. Deep learning has proven successful in ML tasks such as speech processing and computational vision, with a performance that scales with the amount of available data. The lack of large datasets inhibits the flourish of deep learning applications in wireless communications. This paper presents a methodology that combines a vehicle traffic simulator with a raytracing simulator, to generate channel realizations representing 5G scenarios with mobility of both transceivers and objects. The paper then describes a specific dataset for investigating beamselection techniques on vehicle-to-infrastructure using millimeter waves. Experiments using deep learning in classification, regression and reinforcement learning problems illustrate the use of datasets generated with the proposed methodology.",
"title": ""
},
{
"docid": "27f001247d02f075c9279b37acaa49b3",
"text": "A Zadoff–Chu (ZC) sequence is uncorrelated with a non-zero cyclically shifted version of itself. However, this alone is insufficient to mitigate inter-code interference in LTE initial uplink synchronization. The performance of the state-of-the-art algorithms vary widely depending on the specific ZC sequences employed. We develop a systematic procedure to choose the ZC sequences that yield the optimum performance. It turns out that the procedure for ZC code selection in LTE standard is suboptimal when the carrier frequency offset is not small.",
"title": ""
},
{
"docid": "bd9f01cad764a03f1e6cded149b9adbd",
"text": "Psycholinguistic research has shown that the influence of abstract syntactic knowledge on performance is shaped by particular sentences that have been experienced. To explore this idea, the authors applied a connectionist model of sentence production to the development and use of abstract syntax. The model makes use of (a) error-based learning to acquire and adapt sequencing mechanisms and (b) meaning-form mappings to derive syntactic representations. The model is able to account for most of what is known about structural priming in adult speakers, as well as key findings in preferential looking and elicited production studies of language acquisition. The model suggests how abstract knowledge and concrete experience are balanced in the development and use of syntax.",
"title": ""
}
] |
scidocsrr
|
c0eea7eb12bee90fe9a163ea76a5e51c
|
Identifying Meaningful Citations
|
[
{
"docid": "a78149e30a677c320cab3540d55adc4f",
"text": "We develop Markov topic models (MTMs), a novel family of generative probabilistic models that can learn topics simultaneously from multiple corpora, such as papers from different conferences. We apply Gaussian (Markov) random fields to model the correlations of different corpora. MTMs capture both the internal topic structure within each corpus and the relationships between topics across the corpora. We derive an efficient estimation procedure with variational expectation-maximization. We study the performance of our models on a corpus of abstracts from six different computer science conferences. Our analysis reveals qualitative discoveries that are not possible with traditional topic models, and improved quantitative performance over the state of the art.",
"title": ""
},
{
"docid": "7d3c07b505e27fdfea4ada999a233169",
"text": "Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative, statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language. In experimental comparisons to Markov Logic Networks on joint segmentation and coreference, we find our approach to be 3-15 times faster while reducing error by 20-25%—achieving a new state of the art.",
"title": ""
}
] |
[
{
"docid": "8689b038c62d96adf1536594fcc95c07",
"text": "We present an interactive system that allows users to design original pop-up cards. A pop-up card is an interesting form of papercraft consisting of folded paper that forms a three-dimensional structure when opened. However, it is very difficult for the average person to design pop-up cards from scratch because it is necessary to understand the mechanism and determine the positions of objects so that pop-up parts do not collide with each other or protrude from the card. In the proposed system, the user interactively sets and edits primitives that are predefined in the system. The system simulates folding and opening of the pop-up card using a mass–spring model that can simply simulate the physical movement of the card. This simulation detects collisions and protrusions and illustrates the movement of the pop-up card. The results of the present study reveal that the user can design a wide range of pop-up cards using the proposed system.",
"title": ""
},
{
"docid": "0fb16cdc0b8b8371493fb57cbfacec4f",
"text": "Recent years have seen an expansion of interest in non-pharmacological interventions for attention-deficit/hyperactivity disorder (ADHD). Although considerable treatment development has focused on cognitive training programs, compelling evidence indicates that intense aerobic exercise enhances brain structure and function, and as such, might be beneficial to children with ADHD. This paper reviews evidence for a direct impact of exercise on neural functioning and preliminary evidence that exercise may have positive effects on children with ADHD. At present, data are promising and support the need for further study, but are insufficient to recommend widespread use of such interventions for children with ADHD.",
"title": ""
},
{
"docid": "1ca8d5d0e5a318398a50f647d4364905",
"text": "The applications of speech interfaces, commonly used for search and personal assistants, are diversifying to include wearables, appliances, and robots. Hardware-accelerated automatic speech recognition (ASR) is needed for scenarios that are constrained by power, system complexity, or latency. Furthermore, a wakeup mechanism, such as voice activity detection (VAD), is needed to power gate the ASR and downstream system. This paper describes IC designs for ASR and VAD that improve on the accuracy, programmability, and scalability of previous work.",
"title": ""
},
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "d93bc6fa3822dac43949d72a82e5c047",
"text": "In breast cancer, gene expression analyses have defined five tumor subtypes (luminal A, luminal B, HER2-enriched, basal-like and claudin-low), each of which has unique biologic and prognostic features. Here, we comprehensively characterize the recently identified claudin-low tumor subtype. The clinical, pathological and biological features of claudin-low tumors were compared to the other tumor subtypes using an updated human tumor database and multiple independent data sets. These main features of claudin-low tumors were also evaluated in a panel of breast cancer cell lines and genetically engineered mouse models. Claudin-low tumors are characterized by the low to absent expression of luminal differentiation markers, high enrichment for epithelial-to-mesenchymal transition markers, immune response genes and cancer stem cell-like features. Clinically, the majority of claudin-low tumors are poor prognosis estrogen receptor (ER)-negative, progesterone receptor (PR)-negative, and epidermal growth factor receptor 2 (HER2)-negative (triple negative) invasive ductal carcinomas with a high frequency of metaplastic and medullary differentiation. They also have a response rate to standard preoperative chemotherapy that is intermediate between that of basal-like and luminal tumors. Interestingly, we show that a group of highly utilized breast cancer cell lines, and several genetically engineered mouse models, express the claudin-low phenotype. Finally, we confirm that a prognostically relevant differentiation hierarchy exists across all breast cancers in which the claudin-low subtype most closely resembles the mammary epithelial stem cell. These results should help to improve our understanding of the biologic heterogeneity of breast cancer and provide tools for the further evaluation of the unique biology of claudin-low tumors and cell lines.",
"title": ""
},
{
"docid": "f5d6bfa66e4996bddc6ca1fbecc6c25d",
"text": "Internet-connected consumer electronics marketed as smart devices (also known as Internet-of-Things devices) usually lack essential security protection mechanisms. This puts user privacy and security in great danger. One of the essential steps to compromise vulnerable devices is locating them through horizontal port scans. In this paper, we focus on the problem of detecting horizontal port scans in home networks. We propose a software-defined networking (SDN)-based firewall platform that is capable of detecting horizontal port scans. Current SDN implementations (e.g., OpenFlow) do not provide access to packet-level information, which is essential for network security applications, due to performance limitations. Our platform uses FleXight, our proposed new information channel between SDN controller and data path elements to access packet-level information. FleXight uses per-flow sampling and dynamical sampling rate adjustments to provide the necessary information to the controller while keeping the overhead very low. We evaluate our solution on a large real-world packet trace from an ISP and show that our system can identify all attackers and 99% of susceptible victims with only 0.75% network overhead. We also present a detailed usability analysis of our system.",
"title": ""
},
{
"docid": "ff685a2272377e3c8b3596ed92eaccd8",
"text": "The goal of control law design for haptic displays is to provide a safe and stable user interface while maximizing the operator’s sense of kinesthetic immersion in a virtual environment. This paper outlines a control design approach which guarantees the stability of a haptic interface when coupled to a broad class of human operators and virtual environments. Two-port absolute stability criteria are used to develop explicit control law design bounds for two different haptic display implementations: impedance display and admittance display. The strengths and weaknesses of each approach are illustrated through numerical and experimental results for a three degree-of-freedom device. The example highlights the ability of the proposed design procedure to handle some of the more difficult problems in control law synthesis for haptics, including structural flexibility and non-collocation of sensors and actuators. The authors are with the Department of Electrical Engineering University of Washington, Box 352500 Seattle, WA 98195-2500 * corresponding author submitted to IEEE Transactions on Control System Technology 9-7-99 2",
"title": ""
},
{
"docid": "3e675da307edb6543363990d8edfb679",
"text": "In this paper, we propose a modified marginalized autoencoders. Here, the noise adding way at a fixed rate in marginalized autoencoders is replaced by the adaptive noise injection. Compared with the traditional marginalized autoencoders, the proposed method obviously enlarges the recognition performance. Furthermore, the proposed method is applied to identify high-speed train wheel wear conditions. Features of high speed train wheels wear vibration signals are abstracted by using the adaptive noise marginalized autoencoders, and the features is used to realize the wheel wear characteristics of vibration signal recognitions as the input of support vector machine (SVM). The experimental results show that the accuracy of the new method for identifying high-speed train wheel wear conditions is 99.8% on average.",
"title": ""
},
{
"docid": "35463670bc80c009f811f97165db33e1",
"text": "Framing is the process by which a communication source constructs and defines a social or political issue for its audience. While many observers of political communication and the mass media have discussed framing, few have explicitly described how framing affects public opinion. In this paper we offer a theory of framing effects, with a specific focus on the psychological mechanisms by which framing influences political attitudes. We discuss important conceptual differences between framing and traditional theories of persuasion that focus on belief change. We outline a set of hypotheses about the interaction between framing and audience sophistication, and test these in an experiment. The results support our argument that framing is not merely persuasion, as it is traditionally conceived. We close by reflecting on the various routes by which political communications can influence attitudes.",
"title": ""
},
{
"docid": "4e7003b497dc59c373347d8814c8f83e",
"text": "The present experiment was designed to test whether specific recordable changes in the neuromuscular system could be associated with specific alterations in soft- and hard-tissue morphology in the craniofacial region. The effect of experimentally induced neuromuscular changes on the craniofacial skeleton and dentition of eight rhesus monkeys was studied. The neuromuscular changes were triggered by complete nasal airway obstruction and the need for an oral airway. Alterations were also triggered 2 years later by removal of the obstruction and the return to nasal breathing. Changes in neuromuscular recruitment patterns resulted in changed function and posture of the mandible, tongue, and upper lip. There was considerable variation among the animals. Statistically significant morphologic effects of the induced changes were documented in several of the measured variables after the 2-year experimental period. The anterior face height increased more in the experimental animals than in the control animals; the occlusal and mandibular plane angles measured to the sella-nasion line increased; and anterior crossbites and malposition of teeth occurred. During the postexperimental period some of these changes were reversed. Alterations in soft-tissue morphology were also observed during both experimental periods. There was considerable variation in morphologic response among the animals. It was concluded that the marked individual variations in skeletal morphology and dentition resulting from the procedures were due to the variation in nature and degree of neuromuscular and soft-tissue adaptations in response to the altered function. The recorded neuromuscular recruitment patterns could not be directly related to specific changes in morphology.",
"title": ""
},
{
"docid": "e0ca1c29ef4cdc73debabcc4409bd8eb",
"text": "The Internet of Things (IoT) will enable objects to become active participants of everyday activities. Introducing objects into the control processes of complex systems makes IoT security very difficult to address. Indeed, the Internet of Things is a complex paradigm in which people interact with the technological ecosystem based on smart objects through complex processes. The interactions of these four IoT components, person, intelligent object, technological ecosystem, and process, highlight a systemic and cognitive dimension within security of the IoT. The interaction of people with the technological ecosystem requires the protection of their privacy. Similarly, their interaction with control processes requires the guarantee of their safety. Processes must ensure their reliability and realize the objectives for which they are designed. We believe that the move towards a greater autonomy for objects will bring the security of technologies and processes and the privacy of individuals into sharper focus. Furthermore, in parallel with the increasing autonomy of objects to perceive and act on the environment, IoT security should move towards a greater autonomy in perceiving threats and reacting to attacks, based on a cognitive and systemic approach. In this work, we will analyze the role of each of the mentioned actors in IoT security and their relationships, in order to highlight the research challenges and present our approach to these issues based on a holistic vision of IoT security.",
"title": ""
},
{
"docid": "d0b8dc38b0a293e5442276676afc02c9",
"text": "A fundamental dilemma in reinforcement learning is the exploration-exploitation trade-off. Deep reinforcement learning enables agents to act and learn in complex environments, but also introduces new challenges to both exploration and exploitation. Concepts like intrinsic motivation, hierarchical learning or curriculum learning all inspire different methods for exploration, while other agents profit from better methods to exploit current knowledge. In this work a survey of a variety of different approaches to exploration and exploitation in deep reinforcement learning is presented.",
"title": ""
},
{
"docid": "c5428f44292952bfb9443f61aa6d6ce0",
"text": "In this letter, a tunable protection switch device using open stubs for $X$ -band low-noise amplifiers (LNAs) is proposed. The protection switch is implemented using p-i-n diodes. As the parasitic inductance in the p-i-n diodes may degrade the protection performance, tunable open stubs are attached to these diodes to obtain a grounding effect. The performance is optimized for the desired frequency band by adjusting the lengths of the microstrip line open stubs. The designed LNA protection switch is fabricated and measured, and sufficient isolation is obtained for a 200 MHz operating band. The proposed protection switch is suitable for solid-state power amplifier radars in which the LNAs need to be protected from relatively long pulses.",
"title": ""
},
{
"docid": "f715f471118b169502941797d17ceac6",
"text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.",
"title": ""
},
{
"docid": "d62f746c295339b3a3481a60f4015c9c",
"text": "Electrotactile stimulation is a common method of sensory substitution and haptic feedback. One problem with this method has been the large variability in perceived sensation that derives from changes in the impedance of the electrode-skin interface. One way to reduce this variability is to modulate stimulation parameters (current amplitude and pulse duration) in response to impedance changes, which are reflected in the time domain by changes in measured peak resistance, Rp. To work well, this approach requires knowing precisely the relationship between stimulation parameters, peak resistance, and perceived sensation. In this paper, experimental results show that at a constant level of perceived sensation there are linear relationships between Rp and both peak pulse energy, Ep, and phase charge, Q, from which stimulation parameters are easily computed. These linear relationships held across different subjects, sessions, magnitudes of sensation, stimulation locations, and electrode sizes. The average R2 values for these linear relationships were 0.957 for Ep vs. Rp and 0.960 for Q vs. Rp, indicating a nearly perfect fit.",
"title": ""
},
{
"docid": "ac43f790e48424bece26439799654624",
"text": "A scheme of evaluating an impact of a given scientific paper based on importance of papers quoting it is investigated. Introducing a weight of a given citation, dependent on the previous scientific achievements of the author of the citing paper, we define the weighting factor of a given scientist. Technically the weighting factors are defined by the components of the normalized leading eigenvector of the matrix describing the citation graph. The weighting factor of a given scientist, reflecting the scientific output of other researchers quoting his work, allows us to define weighted number of citation of a given paper, weighted impact factor of a journal and weighted Hirsch index of an individual scientist or of an entire scientific institution.",
"title": ""
},
{
"docid": "0604c1ed7ea5a57387d013a5f94f8c00",
"text": "Many current Internet services rely on inferences from models trained on user data. Commonly, both the training and inference tasks are carried out using cloud resources fed by personal data collected at scale from users. Holding and using such large collections of personal data in the cloud creates privacy risks to the data subjects, but is currently required for users to benefit from such services. We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks. Specifically, we take an initial model learnt from a small set of users and retrain it locally using data from a single user. We evaluate on two tasks: one supervised learning task, using a neural network to recognise users' current activity from accelerometer traces; and one unsupervised learning task, identifying topics in a large set of documents. In both cases the accuracy is improved. We also analyse the robustness of our approach against adversarial attacks, as well as its feasibility by presenting a performance evaluation on a representative resource-constrained device (a Raspberry Pi).",
"title": ""
},
{
"docid": "5455a8fd6e6be03e3a4163665425247d",
"text": "The change in spring phenology is recognized to exert a major influence on carbon balance dynamics in temperate ecosystems. Over the past several decades, several studies focused on shifts in spring phenology; however, large uncertainties still exist, and one understudied source could be the method implemented in retrieving satellite-derived spring phenology. To account for this potential uncertainty, we conducted a multimethod investigation to quantify changes in vegetation green-up date from 1982 to 2010 over temperate China, and to characterize climatic controls on spring phenology. Over temperate China, the five methods estimated that the vegetation green-up onset date advanced, on average, at a rate of 1.3 ± 0.6 days per decade (ranging from 0.4 to 1.9 days per decade) over the last 29 years. Moreover, the sign of the trends in vegetation green-up date derived from the five methods were broadly consistent spatially and for different vegetation types, but with large differences in the magnitude of the trend. The large intermethod variance was notably observed in arid and semiarid vegetation types. Our results also showed that change in vegetation green-up date is more closely correlated with temperature than with precipitation. However, the temperature sensitivity of spring vegetation green-up date became higher as precipitation increased, implying that precipitation is an important regulator of the response of vegetation spring phenology to change in temperature. This intricate linkage between spring phenology and precipitation must be taken into account in current phenological models which are mostly driven by temperature.",
"title": ""
},
{
"docid": "02e961880a7925eb9d41c372498cb8d0",
"text": "Since debt is typically riskier in recessions, transfers from equity holders to debt holders associated with each investment also tend to concentrate in recessions. Such systematic risk exposure of debt overhang has important implications for the investment and financing decisions of firms and on the ex ante costs of debt overhang. Using a calibrated dynamic capital structure/real option model, we show that the costs of debt overhang become significantly higher in the presence of macroeconomic risk. We also provide several new predictions that relate the cyclicality of a firm’s assets in place and growth options to its investment and capital structure decisions. We are grateful to Santiago Bazdresch, Bob Goldstein, David Mauer (WFA discussant), Erwan Morellec, Stew Myers, Chris Parsons, Michael Roberts, Antoinette Schoar, Neng Wang, Ivo Welch, and seminar participants at MIT, Federal Reserve Bank of Boston, Boston University, Dartmouth, University of Lausanne, University of Minnesota, the Third Risk Management Conference at Mont Tremblant, the Minnesota Corporate Finance Conference, and the WFA for their comments. MIT Sloan School of Management and NBER. Email: [email protected]. Tel. 617-324-3896. MIT Sloan School of Management. Email: [email protected]. Tel. 617-253-7218.",
"title": ""
},
{
"docid": "1141a01de74dd684f076a1ba402325cb",
"text": "AIMS\nIn several studies, possible risk factors/predictors for severe alcohol withdrawal syndrome (AWS), i.e. delirium tremens (DT) and/or seizures, have been investigated. We have recently observed that low blood platelet count could be such a risk factor/predictor. We therefore investigated whether such an association could be found using a large number of alcohol-dependent individuals (n = 334).\n\n\nMETHODS\nThis study is a retrospectively conducted cohort study based on data from female and male patients (>20 years of age), consecutively admitted to an alcohol treatment unit. The individuals had to fulfil the discharge diagnoses alcohol dependence and alcohol withdrawal syndrome according to DSM-IV.\n\n\nRESULTS\nDuring the treatment period, 3% of the patients developed DT, 2% seizures and none had co-occurrence of both conditions. Among those with DT, a higher proportion had thrombocytopenia. Those with seizures had lower blood platelet count and a higher proportion of them had thrombocytopenia. The sensitivity and specificity of thrombocytopenia for the development of DT during the treatment period was 70% and 69%, respectively. The positive predictive value (PPV) was 6% and the negative predictive value (NPV) was 99%. For the development of seizures, the figure for sensitivity was 75% and for specificity 69%. The figures for PPV and NPV were similar as those for the development of DT.\n\n\nCONCLUSIONS\nThrombocytopenia is more frequent in patients who develop severe AWS (DT or seizures). The findings, including the high NPV of thrombocytopenia, must be interpreted with caution due to the small number of patients who developed AWS. Further studies replicating the present finding are therefore needed before the clinical usefulness can be considered.",
"title": ""
}
] |
scidocsrr
|
e5007e7be2bbcdccdca180e672cc82ff
|
The Role of Lactic Acid Bacteria in Milk Fermentation
|
[
{
"docid": "1007cd10c262718fe108c9ddb0df1091",
"text": "Shalgam juice, hardaliye, boza, ayran (yoghurt drink) and kefir are the most known traditional Turkish fermented non-alcoholic beverages. The first three are obtained from vegetables, fruits and cereals, and the last two ones are made of milk. Shalgam juice, hardaliye and ayran are produced by lactic acid fermentation. Their microbiota is mainly composed of lactic acid bacteria (LAB). Lactobacillus plantarum, Lactobacillus brevis and Lactobacillus paracasei subsp. paracasei in shalgam fermentation and L. paracasei subsp. paracasei and Lactobacillus casei subsp. pseudoplantarum in hardaliye fermentation are predominant. Ayran is traditionally prepared by mixing yoghurt with water and salt. Yoghurt starter cultures are used in industrial ayran production. On the other hand, both alcohol and lactic acid fermentation occur in boza and kefir. Boza is prepared by using a mixture of maize, wheat and rice or their flours and water. Generally previously produced boza or sourdough/yoghurt are used as starter culture which is rich in Lactobacillus spp. and yeasts. Kefir is prepared by inoculation of raw milk with kefir grains which consists of different species of yeasts, LAB, acetic acid bacteria in a protein and polysaccharide matrix. The microbiota of boza and kefir is affected from raw materials, the origin and the production methods. In this review, physicochemical properties, manufacturing technologies, microbiota and shelf life and spoilage of traditional fermented beverages were summarized along with how fermentation conditions could affect rheological properties of end product which are important during processing and storage.",
"title": ""
}
] |
[
{
"docid": "b9da5b905cfe701303b627f359c30624",
"text": "Parametric embedding methods such as parametric t-distributed Stochastic Neighbor Embedding (pt-SNE) enables out-of-sample data visualization without further computationally expensive optimization or approximation. However, pt-SNE favors small mini-batches to train a deep neural network but large minibatches to approximate its cost function involving all pairwise data point comparisons, and thus has difficulty in finding a balance. To resolve the conflicts, we present parametric t-distributed stochastic exemplar-centered embedding. Our strategy learns embedding parameters by comparing training data only with precomputed exemplars to indirectly preserve local neighborhoods, resulting in a cost function with significantly reduced computational and memory complexity. Moreover, we propose a shallow embedding network with high-order feature interactions for data visualization, which is much easier to tune but produces comparable performance in contrast to a deep feedforward neural network employed by pt-SNE. We empirically demonstrate, using several benchmark datasets, that our proposed method significantly outperforms pt-SNE in terms of robustness, visual effects, and quantitative evaluations.",
"title": ""
},
{
"docid": "285a1c073ec4712ac735ab84cbcd1fac",
"text": "During a survey of black yeasts of marine origin, some isolates of Hortaea werneckii were recovered from scuba diving equipment, such as silicone masks and snorkel mouthpieces, which had been kept under poor storage conditions. These yeasts were unambiguously identified by phenotypic and genotypic methods. Phylogenetic analysis of both the D1/D2 regions of 26S rRNA gene and ITS-5.8S rRNA gene sequences showed three distinct genetic types. This species is the agent of tinea nigra which is a rarely diagnosed superficial mycosis in Europe. In fact this mycosis is considered an imported fungal infection being much more prevalent in warm, humid parts of the world such as the Central and South Americas, Africa, and Asia. Although H. werneckii has been found in hypersaline environments in Europe, this is the first instance of the isolation of this halotolerant species from scuba diving equipment made with silicone rubber which is used in close contact with human skin and mucous membranes. The occurrence of this fungus in Spain is also an unexpected finding because cases of tinea nigra in this country are practically not seen.",
"title": ""
},
{
"docid": "85d9b0ed2e9838811bf3b07bb31dbeb6",
"text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.",
"title": ""
},
{
"docid": "65d938eee5da61f27510b334312afe41",
"text": "This paper reviews the actual and potential use of social media in emergency, disaster and crisis situations. This is a field that has generated intense interest. It is characterised by a burgeoning but small and very recent literature. In the emergencies field, social media (blogs, messaging, sites such as Facebook, wikis and so on) are used in seven different ways: listening to public debate, monitoring situations, extending emergency response and management, crowd-sourcing and collaborative development, creating social cohesion, furthering causes (including charitable donation) and enhancing research. Appreciation of the positive side of social media is balanced by their potential for negative developments, such as disseminating rumours, undermining authority and promoting terrorist acts. This leads to an examination of the ethics of social media usage in crisis situations. Despite some clearly identifiable risks, for example regarding the violation of privacy, it appears that public consensus on ethics will tend to override unscrupulous attempts to subvert the media. Moreover, social media are a robust means of exposing corruption and malpractice. In synthesis, the widespread adoption and use of social media by members of the public throughout the world heralds a new age in which it is imperative that emergency managers adapt their working practices to the challenge and potential of this development. At the same time, they must heed the ethical warnings and ensure that social media are not abused or misused when crises and emergencies occur.",
"title": ""
},
{
"docid": "bdf81fccbfa77dadcad43699f815475e",
"text": "The objective of this paper is classifying images by the object categories they contain, for example motorbikes or dolphins. There are three areas of novelty. First, we introduce a descriptor that represents local image shape and its spatial layout, together with a spatial pyramid kernel. These are designed so that the shape correspondence between two images can be measured by the distance between their descriptors using the kernel. Second, we generalize the spatial pyramid kernel, and learn its level weighting parameters (on a validation set). This significantly improves classification performance. Third, we show that shape and appearance kernels may be combined (again by learning parameters on a validation set).\n Results are reported for classification on Caltech-101 and retrieval on the TRECVID 2006 data sets. For Caltech-101 it is shown that the class specific optimization that we introduce exceeds the state of the art performance by more than 10%.",
"title": ""
},
{
"docid": "da540860f3ecb9ca15148a7315b74a45",
"text": "Learning mathematics is one of the most important aspects that determine the future of learners. However, mathematics as one of the subjects is often perceived as being complicated and not liked by the learners. Therefore, we need an application with the use of appropriate technology to create visualization effects which can attract more attention from learners. The application of Augmented Reality technology in digital game is a series of efforts made to create a better visualization effect. In addition, the system is also connected to a leaderboard web service in order to improve the learning motivation through competitive process. Implementation of Augmented Reality is proven to improve student's learning motivation moreover implementation of Augmented Reality in this game is highly preferred by students.",
"title": ""
},
{
"docid": "b3e32f77fde76eba0adfccdc6878a0f3",
"text": "The paper describes a work in progress on humorous response generation for short-text conversation using information retrieval approach. We gathered a large collection of funny tweets and implemented three baseline retrieval models: BM25, the query term reweighting model based on syntactic parsing and named entity recognition, and the doc2vec similarity model. We evaluated these models in two ways: in situ on a popular community question answering platform and in laboratory settings. The approach proved to be promising: even simple search techniques demonstrated satisfactory performance. The collection, test questions, evaluation protocol, and assessors’ judgments create a ground for future research towards more sophisticated models.",
"title": ""
},
{
"docid": "785c716d4f127a5a5fee02bc29aeb352",
"text": "In this paper we propose a novel, improved, phase generated carrier (PGC) demodulation algorithm based on the PGC-differential-cross-multiplying approach (PGC-DCM). The influence of phase modulation amplitude variation and light intensity disturbance (LID) on traditional PGC demodulation algorithms is analyzed theoretically and experimentally. An experimental system for remote no-contact microvibration measurement is set up to confirm the stability of the improved PGC algorithm with LID. In the experiment, when the LID with a frequency of 50 Hz and the depth of 0.3 is applied, the signal-to-noise and distortion ratio (SINAD) of the improved PGC algorithm is 19 dB, higher than the SINAD of the PGC-DCM algorithm, which is 8.7 dB.",
"title": ""
},
{
"docid": "3f8b8ef850aa838289265d175dfa7f1d",
"text": "If competitive equilibrium is defined as a situation in which prices are such that all arbitrage profits are eliminated, is it possible that a competitive economy always be in equilibrium? Clearly not, for then those who arbitrage make no (private) return from their (privately) costly activity. Hence the assumptions that all markets, including that for information, are always in equilibrium and always perfectly arbitraged are inconsistent when arbitrage is costly. We propose here a model in which there is an equilibrium degree of disequilibrium: prices reflect the information of informed individuals (arbitrageurs) but only partially, so that those who expend resources to obtain information do receive compensation. How informative the price system is depends on the number of individuals who are informed; but the number of individuals who are informed is itself an endogenous variable in the model. The model is the simplest one in which prices perform a well-articulated role in conveying information from the informed to the uninformed. When informed individuals observe information that the return to a security is going to be high, they bid its price up, and conversely when they observe information that the return is going to be low. Thus the price system makes publicly available the information obtained by informed individuals to the uniformed. In general, however, it does this imperfectly; this is perhaps lucky, for were it to do it perfectly, an equilibrium would not exist. In the introduction, we shall discuss the general methodology and present some conjectures concerning certain properties of the equilibrium. The remaining analytic sections of the paper are devoted to analyzing in detail an important example of our general model, in which our conjectures concerning the nature of the equilibrium can be shown to be correct. We conclude with a discussion of the implications of our approach and results, with particular emphasis on the relationship of our results to the literature on \"efficient capital markets.\"",
"title": ""
},
{
"docid": "ca8d686b7e0fb3e59508a3b397e8f85e",
"text": "TWIK-related acid-sensitive K(+)-1 (TASK-1 [KCNK3]) and TASK-3 (KCNK9) are tandem pore (K(2P)) potassium (K) channel subunits expressed in carotid bodies and the brainstem. Acidic pH values and hypoxia inhibit TASK-1 and TASK-3 channel function, and halothane enhances this function. These channels have putative roles in ventilatory regulation and volatile anesthetic mechanisms. Doxapram stimulates ventilation through an effect on carotid bodies, and we hypothesized that stimulation might result from inhibition of TASK-1 or TASK-3 K channel function. To address this, we expressed TASK-1, TASK-3, TASK-1/TASK-3 heterodimeric, and TASK-1/TASK-3 chimeric K channels in Xenopus oocytes and studied the effects of doxapram on their function. Doxapram inhibited TASK-1 (half-maximal effective concentration [EC50], 410 nM), TASK-3 (EC50, 37 microM), and TASK-1/TASK-3 heterodimeric channel function (EC50, 9 microM). Chimera studies suggested that the carboxy terminus of TASK-1 is important for doxapram inhibition. Other K2P channels required significantly larger concentrations for inhibition. To test the role of TASK-1 and TASK-3 in halothane-induced immobility, the minimum alveolar anesthetic concentration for halothane was determined and found unchanged in rats receiving doxapram by IV infusion. Our data indicate that TASK-1 and TASK-3 do not play a role in mediating the immobility produced by halothane, although they are plausible molecular targets for the ventilatory effects of doxapram.",
"title": ""
},
{
"docid": "0837c9af9b69367a5a6e32b2f72cef0a",
"text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.",
"title": ""
},
{
"docid": "e2ed500ce298ea175554af97bd0f2f98",
"text": "The Climate CoLab is a system to help thousands of people around the world collectively develop plans for what humans should do about global climate change. This paper shows how the system combines three design elements (model-based planning, on-line debates, and electronic voting) in a synergistic way. The paper also reports early usage experience showing that: (a) the system is attracting a continuing stream of new and returning visitors from all over the world, and (b) the nascent community can use the platform to generate interesting and high quality plans to address climate change. These initial results indicate significant progress towards an important goal in developing a collective intelligence system—the formation of a large and diverse community collectively engaged in solving a single problem.",
"title": ""
},
{
"docid": "39fe1618fad28ec6ad72d326a1d00f24",
"text": "Popular real-time public events often cause upsurge of traffic in Twitter while the event is taking place. These posts range from real-time update of the event's occurrences highlights of important moments thus far, personal comments and so on. A large user group has evolved who seeks these live updates to get a brief summary of the important moments of the event so far. However, major social search engines including Twitter still present the tweets satisfying the Boolean query in reverse chronological order, resulting in thousands of low quality matches agglomerated in a prosaic manner. To get an overview of the happenings of the event, a user is forced to read scores of uninformative tweets causing frustration. In this paper, we propose a method for multi-tweet summarization of an event. It allows the search users to quickly get an overview about the important moments of the event. We have proposed a graph-based retrieval algorithm that identifies tweets with popular discussion points among the set of tweets returned by Twitter search engine in response to a query comprising the event related keywords. To ensure maximum coverage of topical diversity, we perform topical clustering of the tweets before applying the retrieval algorithm. Evaluation performed by summarizing the important moments of a real-world event revealed that the proposed method could summarize the proceeding of different segments of the event with up to 81.6% precision and up to 80% recall.",
"title": ""
},
{
"docid": "9c1beecda61e50dd278e73c55ca703c8",
"text": "Power MOSFET designs have been moving to higher performance particularly in the medium voltage area. (60V to 300V) New designs require lower Specific On-resistance while not sacrificing Unclamped Inductive Switching (UIS) capability or increasing turn-off losses. Two charge balance technologies currently address these needs, the PN junction and the Shielded Gate Charge Balance device topologies. This paper will study the impact of drift region as well as other design parameters that influence the shielded gate class of charge balance devices. The optimum design for maximizing UIS capability and minimizing the impact on other design parameters such as RDSON and switching performance are addressed. It will be shown through TCAD simulation one can design devices to have a stable avalanche point that is not influenced by small variations within a die or die-to-die that result from normal processing. Finally, measured and simulated data will be presented showing a fabricated device with near theoretical UIS capability.",
"title": ""
},
{
"docid": "5c111a5a30f011e4f47fb9e2041644f9",
"text": "Since the audio recapture can be used to assist audio splicing, it is important to identify whether a suspected audio recording is recaptured or not. However, few works on such detection have been reported. In this paper, we propose an method to detect the recaptured audio based on deep learning and we investigate two deep learning techniques, i.e., neural network with dropout method and stack auto-encoders (SAE). The waveform samples of audio frame is directly used as the input for the deep neural network. The experimental results show that error rate around 7.5% can be achieved, which indicates that our proposed method can successfully discriminate recaptured audio and original audio.",
"title": ""
},
{
"docid": "b38529e74442de80822204b63d061e3e",
"text": "Factors other than age and genetics may increase the risk of developing Alzheimer disease (AD). Accumulation of the amyloid-β (Aβ) peptide in the brain seems to initiate a cascade of key events in the pathogenesis of AD. Moreover, evidence is emerging that the sleep–wake cycle directly influences levels of Aβ in the brain. In experimental models, sleep deprivation increases the concentration of soluble Aβ and results in chronic accumulation of Aβ, whereas sleep extension has the opposite effect. Furthermore, once Aβ accumulates, increased wakefulness and altered sleep patterns develop. Individuals with early Aβ deposition who still have normal cognitive function report sleep abnormalities, as do individuals with very mild dementia due to AD. Thus, sleep and neurodegenerative disease may influence each other in many ways that have important implications for the diagnosis and treatment of AD.",
"title": ""
},
{
"docid": "fd97b7130c7d1828566422f49c857db5",
"text": "The phase noise of phase/frequency detectors can significantly raise the in-band phase noise of frequency synthesizers, corrupting the modulated signal. This paper analyzes the phase noise mechanisms in CMOS phase/frequency detectors and applies the results to two different topologies. It is shown that an octave increase in the input frequency raises the phase noise by 6 dB if flicker noise is dominant and by 3 dB if white noise is dominant. An optimization methodology is also proposed that lowers the phase noise by 4 to 8 dB for a given power consumption. Simulation and analytical results agree to within 3.1 dB for the two topologies at different frequencies.",
"title": ""
},
{
"docid": "47c5f3a7230ac19b8889ced2d8f4318a",
"text": "This paper deals with the setting parameter optimization procedure for a multi-phase induction heating system considering transverse flux heating. This system is able to achieve uniform static heating of different thin/size metal pieces without movable inductor parts, yokes or magnetic screens. The goal is reached by the predetermination of the induced power density distribution using an optimization procedure that leads to the required inductor supplying currents. The purpose of the paper is to describe the optimization program with the different solution obtained and to show that some compromise must be done between the accuracy of the temperature profile and the energy consumption.",
"title": ""
},
{
"docid": "2baa441b3daf9736154dd19864ec2497",
"text": "In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values. These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.",
"title": ""
},
{
"docid": "7b314cd0c326cb977b92f4907a0ed737",
"text": "This is the third part of a series of papers that provide a comprehensive survey of the techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I [1] and Part II [2] deal with general target motion models and ballistic target motion models, respectively. This part surveys measurement models, including measurement model-based techniques, used in target tracking. Models in Cartesian, sensor measurement, their mixed, and other coordinates are covered. The stress is on more recent advances — topics that have received more attention recently are discussed in greater details.",
"title": ""
}
] |
scidocsrr
|
bb090e623e20242028023fecb3d439eb
|
Deep Learning with Nonparametric Clustering
|
[
{
"docid": "11ce5da16cf0c0c6cfb85e0d0bbdc13e",
"text": "Recently, fully-connected and convolutional neural networks have been trained to reach state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics data. For classification tasks, much of these “deep learning” models employ the softmax activation functions to learn output labels in 1-of-K format. In this paper, we demonstrate a small but consistent advantage of replacing softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. In almost all of the previous works, hidden representation of deep networks are first learned using supervised or unsupervised techniques, and then are fed into SVMs as inputs. In contrast to those models, we are proposing to train all layers of the deep networks by backpropagating gradients through the top level SVM, learning features of all layers. Our experiments show that simply replacing softmax with linear SVMs gives significant gains on datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop’s face expression recognition challenge.",
"title": ""
},
{
"docid": "e8a78557974794594acb1f0cafb93be4",
"text": "In a Bayesian mixture model it is not necessary a priori to limit the number of components to be finite. In this paper an infinite Gaussian mixture model is presented which neatly sidesteps the difficult problem of finding the “right” number of mixture components. Inference in the model is done using an efficient parameter-free Markov Chain that relies entirely on Gibbs sampling.",
"title": ""
},
{
"docid": "693e935d405b255ac86b8a9f5e7852a3",
"text": "Recent developments have demonstrated the capacity of rest rict d Boltzmann machines (RBM) to be powerful generative models, able to extract useful featu r s from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework fo r developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the R BM adapted to the classification setting. We study different strategies for training the Cla ssRBM and show that competitive classification performances can be reached when appropriately com bining discriminative and generative training objectives. Since training according to the gener ative objective requires the computation of a generally intractable gradient, we also compare differen t approaches to estimating this gradient and address the issue of obtaining such a gradient for proble ms with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "232d7e7986de374499c8ca580d055729",
"text": "In this paper we provide a survey of recent contributions to robust portfolio strategies from operations research and finance to the theory of portfolio selection. Our survey covers results derived not only in terms of the standard mean-variance objective, but also in terms of two of the most popular risk measures, mean-VaR and mean-CVaR developed recently. In addition, we review optimal estimation methods and Bayesian robust approaches.",
"title": ""
},
{
"docid": "f3dcf620edb77a199b2ad9d2410cc858",
"text": "As the amount of digital data grows, so does the theft of sensitive data through the loss or misplacement of laptops, thumb drives, external hard drives, and other electronic storage media. Sensitive data may also be leaked accidentally due to improper disposal or resale of storage media. To protect the secrecy of the entire data lifetime, we must have confidential ways to store and delete data. This survey summarizes and compares existing methods of providing confidential storage and deletion of data in personal computing environments.",
"title": ""
},
{
"docid": "ec377000353bce311c0887cd4edab554",
"text": "This paper explains various security issues in the existing home automation systems and proposes the use of logic-based security algorithms to improve home security. This paper classifies natural access points to a home as primary and secondary access points depending on their use. Logic-based sensing is implemented by identifying normal user behavior at these access points and requesting user verification when necessary. User position is also considered when various access points changed states. Moreover, the algorithm also verifies the legitimacy of a fire alarm by measuring the change in temperature, humidity, and carbon monoxide levels, thus defending against manipulative attackers. The experiment conducted in this paper used a combination of sensors, microcontrollers, Raspberry Pi and ZigBee communication to identify user behavior at various access points and implement the logical sensing algorithm. In the experiment, the proposed logical sensing algorithm was successfully implemented for a month in a studio apartment. During the course of the experiment, the algorithm was able to detect all the state changes of the primary and secondary access points and also successfully verified user identity 55 times generating 14 warnings and 5 alarms.",
"title": ""
},
{
"docid": "55b967cd6d28082ba0fa27605f161060",
"text": "Background. A scheme for format-preserving encryption (FPE) is supposed to do that which a conventional (possibly tweakable) blockcipher does—encipher messages within some message space X—except that message space, instead of being something like X = {0, 1}128, is more gen eral [1, 3]. For example, the message space might be the set X = {0, 1, . . . , 9}16, in which case each 16-digit plaintext X ∈ X gets enciphered into a 16-digit ciphertext Y ∈ X . In a stringbased FPE scheme—the only type of FPE that we consider here—the message space is of the form n X = {0, 1, . . . , radix − 1} for some message length n and alphabet size radix.",
"title": ""
},
{
"docid": "4edb9dea1e949148598279c0111c4531",
"text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.",
"title": ""
},
{
"docid": "6544cffbaf9cc0c6c12991c2acbe2dd5",
"text": "The aim of this updated statement is to provide comprehensive and timely evidence-based recommendations on the prevention of ischemic stroke among survivors of ischemic stroke or transient ischemic attack. Evidence-based recommendations are included for the control of risk factors, interventional approaches for atherosclerotic disease, antithrombotic treatments for cardioembolism, and the use of antiplatelet agents for noncardioembolic stroke. Further recommendations are provided for the prevention of recurrent stroke in a variety of other specific circumstances, including arterial dissections; patent foramen ovale; hyperhomocysteinemia; hypercoagulable states; sickle cell disease; cerebral venous sinus thrombosis; stroke among women, particularly with regard to pregnancy and the use of postmenopausal hormones; the use of anticoagulation after cerebral hemorrhage; and special approaches to the implementation of guidelines and their use in high-risk populations.",
"title": ""
},
{
"docid": "1ea2074181341aaa112a678d75ec5de7",
"text": "5 Evacuation planning and scheduling is a critical aspect of disaster management and national security applications. This paper proposes a conflict-based path-generation approach for evacuation planning. Its key idea is to decompose the evacuation planning problem into a master and a subproblem. The subproblem generates new evacuation paths for each evacuated area, while the master problem optimizes the flow of evacuees and produce an evacuation plan. Each new path is generated to remedy conflicts in the evacuation flows and adds new columns and a new row in the master problem. The algorithm is applied to a set of large-scale evacuation scenarios ranging from the Hawkesbury-Nepean flood plain (West Sydney, Australia) which require evacuating in the order of 70,000 persons, to the New Orleans metropolitan area and its 1,000,000 residents. Experiments illustrate the scalability of the approach which is able to produce evacuation for scenarios with more than 1,200 nodes, while a direct Mixed Integer Programming formulation becomes intractable for instances with more than 5 nodes. With this approach, realistic evacuations scenarios can be solved near-optimally in reasonable time, supporting both evacuation planning in strategic, tactical, and operational environments.",
"title": ""
},
{
"docid": "3ac230304ab65efa3c31b10dc0dffa4d",
"text": "Current networking integrates common \"Things\" to the Web, creating the Internet of Things (IoT). The considerable number of heterogeneous Things that can be part of an IoT network demands an efficient management of resources. With the advent of Fog computing, some IoT management tasks can be distributed toward the edge of the constrained networks, closer to physical devices. Blockchain protocols hosted on Fog networks can handle IoT management tasks such as communication, storage, and authentication. This research goes beyond the current definition of Things and presents the Internet of \"Smart Things.\" Smart Things are provisioned with Artificial Intelligence (AI) features based on CLIPS programming language to become self-inferenceable and self-monitorable. This work uses the permission-based blockchain protocol Multichain to communicate many Smart Things by reading and writing blocks of information. This paper evaluates Smart Things deployed on Edison Arduino boards. Also, this work evaluates Multichain hosted on a Fog network.",
"title": ""
},
{
"docid": "976507b0b89c2202ab603ccedae253f5",
"text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to ngram-based scores while providing more relevant outputs.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "ac34478a54d67abce7c892e058295e63",
"text": "The popularity of the term \"integrated curriculum\" has grown immensely in medical education over the last two decades, but what does this term mean and how do we go about its design, implementation, and evaluation? Definitions and application of the term vary greatly in the literature, spanning from the integration of content within a single lecture to the integration of a medical school's comprehensive curriculum. Taking into account the integrated curriculum's historic and evolving base of knowledge and theory, its support from many national medical education organizations, and the ever-increasing body of published examples, we deem it necessary to present a guide to review and promote further development of the integrated curriculum movement in medical education with an international perspective. We introduce the history and theory behind integration and provide theoretical models alongside published examples of common variations of an integrated curriculum. In addition, we identify three areas of particular need when developing an ideal integrated curriculum, leading us to propose the use of a new, clarified definition of \"integrated curriculum\", and offer a review of strategies to evaluate the impact of an integrated curriculum on the learner. This Guide is presented to assist educators in the design, implementation, and evaluation of a thoroughly integrated medical school curriculum.",
"title": ""
},
{
"docid": "d529d1052fce64ae05fbc64d2b0450ab",
"text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "362b0fc349c827316116a620da34ac91",
"text": "Identifying and correcting grammatical errors in the text written by non-native writers have received increasing attention in recent years. Although a number of annotated corpora have been established to facilitate data-driven grammatical error detection and correction approaches, they are still limited in terms of quantity and coverage because human annotation is labor-intensive, time-consuming, and expensive. In this work, we propose to utilize unlabeled data to train neural network based grammatical error detection models. The basic idea is to cast error detection as a binary classification problem and derive positive and negative training examples from unlabeled data. We introduce an attention-based neural network to capture long-distance dependencies that influence the word being detected. Experiments show that the proposed approach significantly outperforms SVM and convolutional networks with fixed-size context window.",
"title": ""
},
{
"docid": "20d02454fd850d8a7e05123a1769d44b",
"text": "We describe the extension and objective evaluation of a network of semantically related noun senses (or concepts) that has been automatically acquired by analyzing lexical cooccurrence in Wikipedia. The acquisition process makes no use of the metadata or links that have been manually built into the encyclopedia, and nouns in the network are automatically disambiguated to their corresponding noun senses without supervision. For this task, we use the noun sense inventory of WordNet 3.0. Thus, this work can be conceived of as augmenting the WordNet noun ontology with unweighted, undirected relatedto edges between synsets. Our network contains 208,832 such edges. We evaluate our network’s performance on a word sense disambiguation (WSD) task and show: a) the network is competitive with WordNet when used as a stand-alone knowledge source for two WSD algorithms; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource that has been automatically derived from semantic annotations in the Wikipedia corpus.",
"title": ""
},
{
"docid": "4be5f35876daebc0c00528bede15b66c",
"text": "Information Extraction (IE) is concerned with mining factual structures from unstructured text data, including entity and relation extraction. For example, identifying Donald Trump as “person” and Washington D.C. as “location”, and understand the relationship between them (say, Donald Trump spoke at Washington D.C.), from a specific sentence. Typically, IE systems rely on large amount of training data, primarily acquired via human annotation, to achieve the best performance. But since human annotation is costly and non-scalable, the focus has shifted to adoption of a new strategy Distant Supervision [1]. Distant supervision is a technique that can automatically extract labeled training data from existing knowledge bases without human efforts. However the training data generated by distant supervision is context-agnostic and can be very noisy. Moreover, we also observe the difference between the quality of training examples in terms of to what extent it infers the target entity/relation type. In this project, we focus on removing the noise and identifying the quality difference in the training data generated by distant supervision, by leveraging the feedback signals from one of IE’s downstream applications, QA, to improve the performance of one of the state-of-the-art IE framework, CoType [3]. Keywords—Data Mining, Relation Extraction, Question Answering.",
"title": ""
},
{
"docid": "158b554ee5aedcbee9136dcde010dc30",
"text": "In this paper, we propose a novel progressive parameter pruning method for Convolutional Neural Network acceleration, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria in the training process. Experiments show that, with 4× speedup, SPP can accelerate AlexNet with only 0.3% loss of top-5 accuracy and VGG-16 with 0.8% loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2× speedup ResNet-50 only suffers 0.8% loss of top-5 accuracy on ImageNet. We further show the effectiveness of SPP on transfer learning tasks.",
"title": ""
},
{
"docid": "d1357b2e247d521000169dce16f182ee",
"text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.",
"title": ""
},
{
"docid": "88dd795c6d1fa37c13fbf086c0eb0e37",
"text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.",
"title": ""
}
] |
scidocsrr
|
87138638e9e8a41ab72e8795a3b19ac5
|
Trajectory tracking control for hovering and acceleration maneuver of Quad Tilt Rotor UAV
|
[
{
"docid": "bba979cd5d69dac380ba1023441460d3",
"text": "This paper presents a model of a particular class of a convertible MAV with fixed wings. This vehicle can operate as a helicopter as well as a conventional airplane, i.e. the aircraft is able to switch their flight configuration from hover to level flight and vice versa by means of a transition maneuver. The paper focuses on finding a controller capable of performing such transition via the tilting of their four rotors. The altitude should remain on a predefined value throughout the transition stage. For this purpose a nonlinear control strategy based on saturations and Lyapunov design is given. The use of this control law enables to make the transition maneuver while maintaining the aircraft in flight. Numerical results are presented, showing the effectiveness of the proposed methodology to deal with the transition stage.",
"title": ""
}
] |
[
{
"docid": "1fd0f4fd2d63ef3a71f8c56ce6a25fb5",
"text": "A new ‘growing’ maximum likelihood classification algorithm for small reservoir delineation has been developed and is tested with Radarsat-2 data for reservoirs in the semi-arid Upper East Region, Ghana. The delineation algorithm is able to find the land-water boundary from SAR imagery for different weather and environmental conditions. As such, the algorithm allows for remote sensed operational monitoring of small reservoirs.",
"title": ""
},
{
"docid": "8df2c8cf6f6662ed60280b8777c64336",
"text": "In comparative genomics, functional annotations are transferred from one organism to another relying on sequence similarity. With more than 20 million citations in PubMed, text mining provides the ideal tool for generating additional large-scale homology-based predictions. To this end, we have refined a recent dataset of biomolecular events extracted from text, and integrated these predictions with records from public gene databases. Accounting for lexical variation of gene symbols, we have implemented a disambiguation algorithm that uniquely links the arguments of 11.2 million biomolecular events to well-defined gene families, providing interesting opportunities for query expansion and hypothesis generation. The resulting MySQL database, including all 19.2 million original events as well as their homology-based variants, is publicly available at http://bionlp.utu.fi/.",
"title": ""
},
{
"docid": "cad2742f731edaf67924ce002d9a1f94",
"text": "Output impedance of active-clamp converters is a valid method to achieve current sharing among parallel-connected power stages. Nevertheless, parasitic capacitances result in resonances that modify converter behavior and current balance. A solution is presented and validated. The current balance is achieved without a dedicated control.",
"title": ""
},
{
"docid": "724734077fbc469f1bbcad4d7c3b0cbc",
"text": "Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "7956e5fd3372716cb5ae16c6f9e846fb",
"text": "Understanding query intent helps modern search engines to improve search results as well as to display instant answers to the user. In this work, we introduce an accurate query classification method to detect the intent of a user search query. We propose using convolutional neural networks (CNN) to extract query vector representations as the features for the query classification. In this model, queries are represented as vectors so that semantically similar queries can be captured by embedding them into a vector space. Experimental results show that the proposed method can effectively detect intents of queries with higher precision and recall compared to current methods.",
"title": ""
},
{
"docid": "bafdfa2ecaeb18890ab8207ef1bc4f82",
"text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: [email protected], [email protected] (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. 
The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. 
The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queens, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of a promiscuous gay and lesbian lifestyle. Gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. This magazine, which devalued news in favor of stories on entertainment and fashion, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported a surprisingly large number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh",
"title": ""
},
{
"docid": "11d06fb5474df44a6bc733bd5cd1263d",
"text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.",
"title": ""
},
{
"docid": "988b56fdbfd0fbb33bb715adb173c63c",
"text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.",
"title": ""
},
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
},
{
"docid": "89dd97465c8373bb9dabf3cbb26a4448",
"text": "Unidirectional connections from the cortex to the matrix of the corpus striatum initiate the cortico-basal ganglia (BG)-thalamocortical loop, thought to be important in momentary action selection and in longer-term fine tuning of behavioural repertoire; a discrete set of striatal compartments, striosomes, has the complementary role of registering or anticipating reward that shapes corticostriatal plasticity. Re-entrant signals traversing the cortico-BG loop impact predominantly frontal cortices, conveyed through topographically ordered output channels; by contrast, striatal input signals originate from a far broader span of cortex, and are far more divergent in their termination. The term ‘disclosed loop’ is introduced to describe this organisation: a closed circuit that is open to outside influence at the initial stage of cortical input. The closed circuit component of corticostriatal afferents is newly dubbed ‘operative’, as it is proposed to establish the bid for action selection on the part of an incipient cortical action plan; the broader set of converging corticostriatal afferents is described as contextual. A corollary of this proposal is that every unit of the striatal volume, including the long, C-shaped tail of the caudate nucleus, should receive a mandatory component of operative input, and hence include at least one area of BG-recipient cortex amongst the sources of its corticostriatal afferents. Individual operative afferents contact twin classes of GABAergic striatal projection neuron (SPN), distinguished by their neurochemical character, and onward circuitry. This is the basis of the classic direct and indirect pathway model of the cortico-BG loop. Each pathway utilises a serial chain of inhibition, with two such links, or three, providing positive and negative feedback, respectively. Operative co-activation of direct and indirect SPNs is, therefore, pictured to simultaneously promote action, and to restrain it. The balance of this rival activity is determined by the contextual inputs, which summarise the external and internal sensory environment, and the state of ongoing behavioural priorities. Notably, the distributed sources of contextual convergence upon a striatal locus mirror the transcortical network harnessed by the origin of the operative input to that locus, thereby capturing a similar set of contingencies relevant to determining action. The disclosed loop formulation of corticostriatal and subsequent BG loop circuitry, as advanced here, refines the operating rationale of the classic model and allows the integration of more recent anatomical and physiological data, some of which can appear at variance with the classic model. Equally, it provides a lucid functional context for continuing cellular studies of SPN biophysics and mechanisms of synaptic plasticity.",
"title": ""
},
{
"docid": "330704fbad279c826eb7cf3a174b78a3",
"text": "The problem of planning and goal-directed behavior has been addressed in computer science for many years, typically based on classical concepts like Bellman’s optimality principle, dynamic programming, or Reinforcement Learning methods – but is this the only way to address the problem? Recently there is growing interest in using probabilistic inference methods for decision making and planning. Promising about such approaches is that they naturally extend to distributed state representations and efficiently cope with uncertainty. In sensor processing, inference methods typically compute a posterior over state conditioned on observations – applied in the context of action selection they compute a posterior over actions conditioned on goals. In this paper we will first introduce the idea of using inference for reasoning about actions on an intuitive level, drawing connections to the idea of internal simulation. We then survey previous and own work using the new approach to address (partially observable) Markov Decision Processes and stochastic optimal control problems.",
"title": ""
},
{
"docid": "2fa61482be37fd956e6eceb8e517411d",
"text": "According to analysis reports on road accidents of recent years, it's renowned that the main cause of road accidents resulting in deaths, severe injuries and monetary losses, is due to a drowsy or a sleepy driver. Drowsy state may be caused by lack of sleep, medication, drugs or driving continuously for long time period. An increase rate of roadside accidents caused due to drowsiness during driving indicates a need of a system that detects such state of a driver and alerts him prior to the occurrence of any accident. During the recent years, many researchers have shown interest in drowsiness detection. Their approaches basically monitor either physiological or behavioral characteristics related to the driver or the measures related to the vehicle being used. A literature survey summarizing some of the recent techniques proposed in this area is provided. To deal with this problem we propose an eye blink monitoring algorithm that uses eye feature points to determine the open or closed state of the eye and activate an alarm if the driver is drowsy. Detailed experimental findings are also presented to highlight the strengths and weaknesses of our technique. An accuracy of 94% has been recorded for the proposed methodology.",
"title": ""
},
{
"docid": "3e4d937d38a61a94bb8647d3f7b02802",
"text": "Most classification algorithms deal with datasets which have a set of input features, the variables to be used as predictors, and only one output class, the variable to be predicted. However, in late years many scenarios in which the classifier has to work with several outputs have come to life. Automatic labeling of text documents, image annotation or protein classification are among them. Multilabel datasets are the product of these new needs, and they have many specific traits. The mldr package allows the user to load datasets of this kind, obtain their characteristics, produce specialized plots, and manipulate them. The goal is to provide the exploratory tools needed to analyze multilabel datasets, as well as the transformation and manipulation functions that will make possible to apply binary and multiclass classification models to this data or the development of new multilabel classifiers. Thanks to its integrated user interface, the exploratory functions will be available even to non-specialized R users.",
"title": ""
},
{
"docid": "83c0b27f08494806481468fa4704d679",
"text": "A RFID system with a chipless RFID tag on a 90-µm thin Taconic TF290 laminate is presented. The chipless tag encodes data into the spectral signature in both magnitude and phase of the spectrum. The design and operation of a prototype RFID reader is also presented. The RFID reader operates between 5 – 10.7 GHz frequency band and successfully detects a chipless tag at 15 cm range. The tag design can be transferred easily to plastic and paper, making it suitable for mass deployment for low cost items and has the potential to replace trillions of barcodes printed each year. The RFID reader is suitable for mounting over conveyor belt systems.",
"title": ""
},
{
"docid": "af2779ab87ff707d51e735977a4fa0e2",
"text": "The increasing availability of large motion databases, in addition to advancements in motion synthesis, has made motion indexing and classification essential for better motion composition. However, in order to achieve good connectivity in motion graphs, it is important to understand human behaviour; human movement though is complex and difficult to completely describe. In this paper, we investigate the similarities between various emotional states with regards to the arousal and valence of the Russell’s circumplex model. We use a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis (LMA). Motion capture data from acted dance performances were used for training and classification purposes. The experimental results show that the proposed features can partially extract the LMA components, providing a representative space for indexing and classification of dance movements with regards to the emotion. This work contributes to the understanding of human behaviour and actions, providing insights on how people express emotional states using their body, while the proposed features can be used as complement to the standard motion similarity, synthesis and classification methods.",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "255ff39001f9bbcd7b1e6fe96f588371",
"text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.",
"title": ""
},
{
"docid": "b992e02ee3366d048bbb4c30a2bf822c",
"text": "Structured graphics models such as Scalable Vector Graphics (SVG) enable designers to create visually rich graphics for user interfaces. Unfortunately current programming tools make it difficult to implement advanced interaction techniques for these interfaces. This paper presents the Hierarchical State Machine Toolkit (HsmTk), a toolkit targeting the development of rich interactions. The key aspect of the toolkit is to consider interactions as first-class objects and to specify them with hierarchical state machines. This approach makes the resulting behaviors self-contained, easy to reuse and easy to modify. Interactions can be attached to graphical elements without knowing their detailed structure, supporting the parallel refinement of the graphics and the interaction.",
"title": ""
},
{
"docid": "a8aa7af1b9416d4bd6df9d4e8bcb8a40",
"text": "User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.",
"title": ""
}
] |
scidocsrr
|
d481b29bacd75dfaeaa95fc807645f4f
|
DOES HUMAN FACIAL ATTRACTIVENESS HONESTLY ADVERTISE HEALTH ? Longitudinal Data on an Evolutionary Question
|
[
{
"docid": "6210a0a93b97a12c2062ac78953f3bd1",
"text": "This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.",
"title": ""
}
] |
[
{
"docid": "4599529680781f9d3d19f766e51a7734",
"text": "Existing support vector regression (SVR) based image superresolution (SR) methods always utilize single layer SVR model to reconstruct source image, which are incapable of restoring the details and reduce the reconstruction quality. In this paper, we present a novel image SR approach, where a multi-layer SVR model is adopted to describe the relationship between the low resolution (LR) image patches and the corresponding high resolution (HR) ones. Besides, considering the diverse content in the image, we introduce pixel-wise classification to divide pixels into different classes, such as horizontal edges, vertical edges and smooth areas, which is more conductive to highlight the local characteristics of the image. Moreover, the input elements to each SVR model are weighted respectively according to their corresponding output pixel's space positions in the HR image. Experimental results show that, compared with several other learning-based SR algorithms, our method gains high-quality performance.",
"title": ""
},
{
"docid": "f12c53ede3ef1cbab2641970aacbe16f",
"text": "Considerable advances have been achieved in estimating the depth map from a single image via convolutional neural networks (CNNs) during the past few years. Combining depth prediction from CNNs with conventional monocular simultaneous localization and mapping (SLAM) is promising for accurate and dense monocular reconstruction, in particular addressing the two long-standing challenges in conventional monocular SLAM: low map completeness and scale ambiguity. However, depth estimated by pretrained CNNs usually fails to achieve sufficient accuracy for environments of different types from the training data, which are common for certain applications such as obstacle avoidance of drones in unknown scenes. Additionally, inaccurate depth prediction of CNN could yield large tracking errors in monocular SLAM. In this paper, we present a real-time dense monocular SLAM system, which effectively fuses direct monocular SLAM with an online-adapted depth prediction network for achieving accurate depth prediction of scenes of different types from the training data and providing absolute scale information for tracking and mapping. Specifically, on one hand, tracking pose (i.e., translation and rotation) from direct SLAM is used for selecting a small set of highly effective and reliable training images, which acts as ground truth for tuning the depth prediction network on-the-fly toward better generalization ability for scenes of different types. A stage-wise Stochastic Gradient Descent algorithm with a selective update strategy is introduced for efficient convergence of the tuning process. On the other hand, the dense map produced by the adapted network is applied to address scale ambiguity of direct monocular SLAM which in turn improves the accuracy of both tracking and overall reconstruction. The system with assistance of both CPUs and GPUs, can achieve real-time performance with progressively improved reconstruction accuracy. Experimental results on public datasets and live application to obstacle avoidance of drones demonstrate that our method outperforms the state-of-the-art methods with greater map completeness and accuracy, and a smaller tracking error.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "5aeffba75c1e6d5f0e7bde54662da8e8",
"text": "A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.",
"title": ""
},
{
"docid": "6256a71f6c852d4be82f029e785b9d1f",
"text": "Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging on a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition/verification models, and minimise pose discrepancy during testing, which lead to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. We achieve state-of-the-art verification accuracy, 94.05%, under the CFP frontal-profile protocol only by combining pose augmentation during training and pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (we refer as WildUV) that comprises of complete facial UV maps from 1,892 identities for research purposes.",
"title": ""
},
{
"docid": "c80222e5a7dfe420d16e10b45f8fab66",
"text": "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.",
"title": ""
},
{
"docid": "35e662f6c1d75e6878a78c4c443b9448",
"text": "ÐThis paper introduces a refined general definition of a skeleton that is based on a penalized-distance function and cannot create any of the degenerate cases of the earlier CEASAR and TEASAR algorithms. Additionally, we provide an algorithm that finds the skeleton accurately and rapidly. Our solution is fully automatic, which frees the user from having to engage in manual data preprocessing. We present the accurate skeletons computed on a number of test datasets. The algorithm is very efficient as demonstrated by the running times which were all below seven minutes. Index TermsÐSkeleton, centerline, medial axis, automatic preprocessing, modeling.",
"title": ""
},
{
"docid": "21321c82a296da3c8c1f0637e3bfc3e7",
"text": "We present a discrete distance transform in style of the vector propagation algorithm by Danielsson. Like other vector propagation algorithms, the proposed method is close to exact, i.e., the error can be strictly bounded from above and is significantly smaller than one pixel. Our contribution is that the algorithm runs entirely on consumer class graphics hardware, thereby achieving a throughput of up to 96 Mpixels/s. This allows the proposed method to be used in a wide range of applications that rely both on high speed and high quality.",
"title": ""
},
{
"docid": "5bf90680117b7db4315cce18bc9aefa2",
"text": "Motivated by aiding human operators in the detection of dangerous objects in passenger luggage, such as in airports, we develop an automatic object detection approach for multi-view X-ray image data. We make three main contributions: First, we systematically analyze the appearance variations of objects in X-ray images from inspection systems. We then address these variations by adapting standard appearance-based object detection approaches to the specifics of dual-energy X-ray data and the inspection scenario itself. To that end we reduce projection distortions, extend the feature representation, and address both in-plane and out-of-plane object rotations, which are a key challenge compared to many detection tasks in photographic images. Finally, we propose a novel multi-view (multi-camera) detection approach that combines single-view detections from multiple views and takes advantage of the mutual reinforcement of geometrically consistent hypotheses. While our multi-view approach can be used atop arbitrary single-view detectors, thus also for multi-camera detection in photographic images, we evaluate our method on detecting handguns in carry-on luggage. Our results show significant performance gains from all components.",
"title": ""
},
{
"docid": "6042dab731ca69452d22eaa319365c77",
"text": "An overview is presented of the current state-of-theart in silicon nanophotonic ring resonators. Basic theory of ring resonators is discussed, and applied to the peculiarities of submicron silicon photonic wire waveguides: the small dimensions and tight bend radii, sensitivity to perturbations and the boundary conditions of the fabrication processes. Theory is compared to quantitative measurements. Finally, several of the more promising applications of silicon ring resonators are discussed: filters and optical delay lines, label-free biosensors, and active rings for efficient modulators and even light sources. Silicon microring resonators Wim Bogaerts*, Peter De Heyn, Thomas Van Vaerenbergh, Katrien De Vos, Shankar Kumar Selvaraja, Tom Claes, Pieter Dumon, Peter Bienstman, Dries Van Thourhout, and Roel Baets",
"title": ""
},
{
"docid": "a1221c2ae735a971047018911b5567e5",
"text": "Market integration allows increasing the social welfare of a given society. In most markets, integration also raises the social welfare of the participating markets (partakers). However, electricity markets have complexities such as transmission network congestion and requirements of power reserve that could lead to a decrease in the social welfare of some partakers. The social welfare reduction of partakers, if it occurs, would surely be a hindrance to the development of regional markets, since participants are usually national systems. This paper shows a new model for the regional dispatch of energy and reserve, and proposes as constraints that the social welfare of partakers does not decrease with respect to that obtained from the isolated optimal operation. These social welfare constraints are characterized by their stochastic nature and their dependence on the energy price of different operating states. The problem is solved by the combination of two optimization models (hybrid optimization): A linear model embedded within a meta-heuristic algorithm, which is known as the swarm version of the Means Variance Mapping Optimization (MVMOS). MVMOS allows incorporating the stochastic nature of social welfare constraints through a dynamic penalty scheme, which considers the fulfillment degree along with the dynamics of the search process. & 2016 Published by Elsevier B.V. 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88",
"title": ""
},
{
"docid": "f93ee5c9de994fa07e7c3c1fe6e336d1",
"text": "Sleep bruxism (SB) is characterized by repetitive and coordinated mandible movements and non-functional teeth contacts during sleep time. Although the etiology of SB is controversial, the literature converges on its multifactorial origin. Occlusal factors, smoking, alcoholism, drug usage, stress, and anxiety have been described as SB trigger factors. Recent studies on this topic discussed the role of neurotransmitters on the development of SB. Thus, the purpose of this study was to detect and quantify the urinary levels of catecholamines, specifically of adrenaline, noradrenaline and dopamine, in subjects with SB and in control individuals. Urine from individuals with SB (n = 20) and without SB (n = 20) was subjected to liquid chromatography. The catecholamine data were compared by Mann–Whitney’s test (p ≤ 0.05). Our analysis showed higher levels of catecholamines in subjects with SB (adrenaline = 111.4 µg/24 h; noradrenaline = 261,5 µg/24 h; dopamine = 479.5 µg/24 h) than in control subjects (adrenaline = 35,0 µg/24 h; noradrenaline = 148,7 µg/24 h; dopamine = 201,7 µg/24 h). Statistical differences were found for the three catecholamines tested. It was concluded that individuals with SB have higher levels of urinary catecholamines.",
"title": ""
},
{
"docid": "8cecac2a619701d7a7a16d706beadc0a",
"text": "Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior -- often referred to as gaming -- the performance of the classifier may deteriorate sharply. Indeed, gaming is a well-known obstacle for using machine learning methods in practice; in financial policy-making, the problem is widely known as Goodhart's law. In this paper, we formalize the problem, and pursue algorithms for learning classifiers that are robust to gaming.\n We model classification as a sequential game between a player named \"Jury\" and a player named \"Contestant.\" Jury designs a classifier, and Contestant receives an input to the classifier drawn from a distribution. Before being classified, Contestant may change his input based on Jury's classifier. However, Contestant incurs a cost for these changes according to a cost function. Jury's goal is to achieve high classification accuracy with respect to Contestant's original input and some underlying target classification function, assuming Contestant plays best response. Contestant's goal is to achieve a favorable classification outcome while taking into account the cost of achieving it.\n For a natural class of \"separable\" cost functions, and certain generalizations, we obtain computationally efficient learning algorithms which are near optimal, achieving a classification error that is arbitrarily close to the theoretical minimum. Surprisingly, our algorithms are efficient even on concept classes that are computationally hard to learn. For general cost functions, designing an approximately optimal strategy-proof classifier, for inverse-polynomial approximation, is NP-hard.",
"title": ""
},
{
"docid": "64a3877186106c911891f4f6fe7fbede",
"text": "In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements for integrating the internal cognitive states and external subconscious behaviors of users to improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements for their representational capacities and identify that EEG has the advantage of classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter within and between sessions.",
"title": ""
},
{
"docid": "3aaf13c82f525299b7b4e93d316bfd18",
"text": "Recently, many graph based hashing methods have been emerged to tackle large-scale problems. However, there exists two major bottlenecks: (1) directly learning discrete hashing codes is an NP-hard optimization problem; (2) the complexity of both storage and computational time to build a graph with n data points is O(n2). To address these two problems, in this paper, we propose a novel yet simple supervised graph based hashing method, asymmetric discrete graph hashing, by preserving the asymmetric discrete constraint and building an asymmetric affinity matrix to learn compact binary codes. Specifically, we utilize two different instead of identical discrete matrices to better preserve the similarity of the graph with short binary codes.We generate the asymmetric affinity matrix using m (m << n) selected anchors to approximate the similarity among all training data so that computational time and storage requirement can be significantly improved. In addition, the proposed method jointly learns discrete binary codes and a low-dimensional projection matrix to further improve the retrieval accuracy. Extensive experiments on three benchmark large-scale databases demonstrate its superior performance over the recent state of the arts with lower training time costs.",
"title": ""
},
{
"docid": "e8f86dad01a7e3bd25bdabdc7a3d7136",
"text": "In this paper, a wideband monopole antenna with high gain characteristics has been proposed. Number of slits was introduced at the far radiating edge to transform it to multiple monopole radiators. Partial ground plane has been used to widen the bandwidth while by inserting suitable slits at the radiating edges return loss and bandwidth has been improved. The proposed antenna provides high gain up to 13.2dB and the achieved impedance bandwidth is wider than an earlier reported design. FR4 Epoxy with dielectric constant 4.4 and loss tangent 0.02 has been used as substrate material. Antenna has been simulated using HFSS (High Frequency Structure Simulator) as a 3D electromagnetic field simulator, based on finite element method. A good settlement has been found between simulated and measured results. The proposed design is suitable for GSM (890-960MHz), GPS (L1:1575.42MHz, L2:1227.60MHz, L3:1381.05MHz, L4:1379.913MHz, L5:1176.45MHz), DCS (1710-1880MHz), PCS (1850-1990MHz), UMTS(1920-2170MHz), Wi-Fi/WLAN/Hiper LAN/IEEE 802.11 2.4GHz (2412-2484MHz), 3.6GHz (3657.5-3690.0MHz) and 4.9/5.0GHz (4915-5825MHz), Bluetooth (2400-2484MHz), WiMAX 2.3GHz (2.3-2.5GHz), 2.5GHz (2500-2690 MHz), 3.3GHz, 3.5GHz (3400-3600MHz) and 5.8GHz (5.6-5.9GHz) & LTE applications.",
"title": ""
},
{
"docid": "8904494e20d6761437e4d63c86c43e78",
"text": "Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.",
"title": ""
},
{
"docid": "ccbd40976208fcb7a61d67674d1115af",
"text": "Requirements Management (RM) is about organising the requirements and additional information gathered during the Requirements Engineering (RE) process, and managing changes of these requirements. Practioners as well as researchers acknowledge that RM is both important and difficult, and that changing requirements is a challenging factor in many development projects. But why, then, is so little research done within RM? This position paper identifies and discusses five research areas where further research within RM is needed.",
"title": ""
},
{
"docid": "8b2f4d597b1aa5a9579fa3e37f6acc65",
"text": "This work presents a 910MHz/2.4GHz dual-band dipole antenna for Power Harvesting and/or Sensor Network applications whose main advantage lies on its easily tunable bands. Tunability is achieved via the low and high frequency dipole separation Wgap. This separation is used to increase or decrease the S11 magnitude of the required bands. Such tunability can be used to harvest energy in environments where the electric field strength of one carrier band is dominant over the other one, or in the case when both carriers have similar electric field strength. If the environment is crowed by 820MHz-1.02GHz carries Wgap is adjusted to 1mm in order to harvest/sense only the selected band; if the environment is full of 2.24GHz - 2.52 GHz carriers Wgap is set to 7mm. When Wgap is selected to 4mm both bands can be harvested/sensed. The proposed antenna works for UHF-RFID, GSM-840MHz, 3G-UMTS, Wi-Fi and Bluetooth standards. Simulations are carried out in Advanced Design System (ADS) Momentum using commercial FR4 printed circuit board specification.",
"title": ""
}
] |
scidocsrr
|
b152fd45a91b75082af133742b0f89bc
|
AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models
|
[
{
"docid": "e1ada58b1ae0e92f12d4fb049de5a4bb",
"text": "We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages.",
"title": ""
}
] |
[
{
"docid": "b382f93bb45e7324afaff9950d814cf3",
"text": "OBJECTIVE\nA vocational rehabilitation program (occupational therapy and supported employment) for promoting the return to the community of long-stay persons with schizophrenia was established at a psychiatric hospital in Japan. The purpose of the study was to evaluate the program in terms of hospitalization rates, community tenure, and social functioning with each individual serving as his or her control.\n\n\nMETHODS\nFifty-two participants, averaging 8.9 years of hospitalization, participated in the vocational rehabilitation program consisting of 2 to 6 hours of in-hospital occupational therapy for 6 days per week and a post-discharge supported employment component. Seventeen years after the program was established, a retrospective study was conducted to evaluate the impact of the program on hospitalizations, community tenure, and social functioning after participants' discharge from hospital, using an interrupted time-series analysis. The postdischarge period was compared with the period from onset of illness to the index discharge on the three outcome variables.\n\n\nRESULTS\nAfter discharge from the hospital, the length of time spent by participants out of the hospital increased, social functioning improved, and risk of hospitalization diminished by 50%. Female participants and those with supportive families spent more time out of the hospital than participants who were male or came from nonsupportive families.\n\n\nCONCLUSION\nA combined program of occupational therapy and supported employment was successful in a Japanese psychiatric hospital when implemented with the continuing involvement of a clinical team. Interventions that improve the emotional and housing supports provided to persons with schizophrenia by their families are likely to enhance the outcome of vocational services.",
"title": ""
},
{
"docid": "0780f9240aaaa6b45cf4edf1d0de15ec",
"text": "Adaptive Case Management (ACM) is a new paradigm that facilitates the coordination of knowledge work through case handling. Current ACM systems, however, lack support of providing sophisticated user guidance for next step recommendations and predictions about the case future. In recent years, process mining research developed approaches to make recommendations and predictions based on event logs readily available in process-aware information systems. This paper builds upon those approaches and integrates them into an existing ACM solution. The research goal is to design and develop a prototype that gives next step recommendations and predictions based on process mining techniques in ACM systems. The models proposed, recommend actions that shorten the case running time, mitigate deadline transgressions, support case goals and have been used in former cases with similar properties. They further give case predictions about the remaining time, possible deadline violations, and whether the current case path supports given case goals. A final evaluation proves that the prototype is indeed capable of making proper recommendations and predictions. In addition, starting points for further improvement are discussed.",
"title": ""
},
{
"docid": "8503b51197d8242c4ec242f7190c2405",
"text": "We provide a state-of-the-art explication of application security and software protection. The relationship between application security and data security, network security, and software security is discussed. Three simplified threat models for software are sketched. To better understand what attacks must be defended against in order to improve software security, we survey software attack approaches and attack tools. A simplified software security view of a software application is given, and along with illustrative examples, used to motivate a partial list of software security requirements for applications.",
"title": ""
},
{
"docid": "fafbcccd49d324ea45dbe4c341d4c7d9",
"text": "This paper discusses the technical issues that were required to adapt a KUKA Robocoaster for use as a real-time motion simulator. Within this context, the paper addresses the physical modifications and the software control structure that were needed to have a flexible and safe experimental setup. It also addresses the delays and transfer function of the system. The paper is divided into two sections. The first section describes the control and safety structures of the MPI Motion Simulator. The second section shows measurements of latencies and frequency responses of the motion simulator. The results show that the frequency responses of the MPI Motion Simulator compare favorably with high-end Stewart Platforms, and therefore demonstrate the suitability of robot-based motion simulators for flight simulation.",
"title": ""
},
{
"docid": "05e6bc54f6175e1f9bb296500bc3d9e7",
"text": "This article describes XRel, a novel approach for storage and retrieval of XML documents using relational databases. In this approach, an XML document is decomposed into nodes on the basis of its tree structure and stored in relational tables according to the node type, with path information from the root to each node. XRel enables us to store XML documents using a fixed relational schema without any information about DTDs and also to utilize indices such as the B+-tree and the R-tree supported by database management systems. Thus, XRel does not need any extension of relational databases for storing XML documents. For processing XML queries, we present an algorithm for translating a core subset of XPath expressions into SQL queries. Finally, we demonstrate the effectiveness of this approach through several experiments using actual XML documents.",
"title": ""
},
{
"docid": "78a8eb1c05d8af52ca32ba29b3fcf89b",
"text": "Pediatric firearm-related deaths and injuries are a national public health crisis. In this Special Review Article, we characterize the epidemiology of firearm-related injuries in the United States and discuss public health programs, the role of pediatricians, and legislative efforts to address this health crisis. Firearm-related injuries are leading causes of unintentional injury deaths in children and adolescents. Children are more likely to be victims of unintentional injuries, the majority of which occur in the home, and adolescents are more likely to suffer from intentional injuries due to either assault or suicide attempts. Guns are present in 18% to 64% of US households, with significant variability by geographic region. Almost 40% of parents erroneously believe their children are unaware of the storage location of household guns, and 22% of parents wrongly believe that their children have never handled household guns. Public health interventions to increase firearm safety have demonstrated varying results, but the most effective programs have provided free gun safety devices to families. Pediatricians should continue working to reduce gun violence by asking patients and their families about firearm access, encouraging safe storage, and supporting firearm-related injury prevention research. Pediatricians should also play a role in educating trainees about gun violence. From a legislative perspective, universal background checks have been shown to decrease firearm homicides across all ages, and child safety laws have been shown to decrease unintentional firearm deaths and suicide deaths in youth. A collective, data-driven public health approach is crucial to halt the epidemic of pediatric firearm-related injury.",
"title": ""
},
{
"docid": "6954c2a51c589987ba7e37bd81289ba1",
"text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.",
"title": ""
},
{
"docid": "f838806a316b4267e166e7215db12166",
"text": "This paper presents a computationally efficient method for action recognition from depth video sequences. It employs the so called depth motion maps (DMMs) from three projection views (front, side and top) to capture motion cues and uses local binary patterns (LBPs) to gain a compact feature representation. Two types of fusion consisting of feature-level fusion and decision-level fusion are considered. In the feature-level fusion, LBP features from three DMMs are merged before classification while in the decision-level fusion, a soft decision-fusion rule is used to combine the classification outcomes. The introduced method is evaluated on two standard datasets and is also compared with the existing methods. The results indicate that it outperforms the existing methods and is able to process depth video sequences in real-time.",
"title": ""
},
{
"docid": "28899946726bc1e665298f09ea9e654d",
"text": "This paper presents a simple and robust mechanism, called change-point monitoring (CPM), to detect denial of service (DoS) attacks. The core of CPM is based on the inherent network protocol behavior and is an instance of the sequential change point detection. To make the detection mechanism insensitive to sites and traffic patterns, a nonparametric cumulative sum (CUSUM) method is applied, thus making the detection mechanism robust, more generally applicable, and its deployment much easier. CPM does not require per-flow state information and only introduces a few variables to record the protocol behaviors. The statelessness and low computation overhead of CPM make itself immune to any flooding attacks. As a case study, the efficacy of CPM is evaluated by detecting a SYN flooding attack - the most common DoS attack. The evaluation results show that CPM has short detection latency and high detection accuracy",
"title": ""
},
{
"docid": "5c8570045e83b72643f1ac99018351ea",
"text": "OBJECTIVES\nAlthough anxiety exists concerning the perceived risk of transmission of bloodborne viruses after community-acquired needlestick injuries, seroconversion seems to be rare. The objectives of this study were to describe the epidemiology of pediatric community-acquired needlestick injuries and to estimate the risk of seroconversion for HIV, hepatitis B virus, and hepatitis C virus in these events.\n\n\nMETHODS\nThe study population included all of the children presenting with community-acquired needlestick injuries to the Montreal Children's Hospital between 1988 and 2006 and to Hôpital Sainte-Justine between 1995 and 2006. Data were collected prospectively at Hôpital Sainte-Justine from 2001 to 2006. All of the other data were reviewed retrospectively by using a standardized case report form.\n\n\nRESULTS\nA total of 274 patients were identified over a period of 19 years. Mean age was 7.9 +/- 3.4 years. A total of 176 (64.2%) were boys. Most injuries occurred in streets (29.2%) or parks (24.1%), and 64.6% of children purposely picked up the needle. Only 36 patients (13.1%) noted blood on the device. Among the 230 patients not known to be immune for hepatitis B virus, 189 (82.2%) received hepatitis B immunoglobulin, and 213 (92.6%) received hepatitis B virus vaccine. Prophylactic antiretroviral therapy was offered beginning in 1997. Of the 210 patients who presented thereafter, 82 (39.0%) received chemoprophylaxis, of whom 69 (84.1%) completed a 4-week course of therapy. The use of a protease inhibitor was not associated with a significantly higher risk of adverse effects or early discontinuation of therapy. At 6 months, 189 were tested for HIV, 167 for hepatitis B virus, and 159 for hepatitis C virus. There were no seroconversions.\n\n\nCONCLUSIONS\nWe observed no seroconversions in 274 pediatric community-acquired needlestick injuries, thereby confirming that the risk of transmission of bloodborne viruses in these events is very low.",
"title": ""
},
{
"docid": "e77dc44a5b42d513bdbf4972d62a74f9",
"text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"title": ""
},
{
"docid": "1b802879e554140e677020e379b866c1",
"text": "This study investigated vertical versus shared leadership as predictors of the effectiveness of 71 change management teams. Vertical leadership stems from an appointed or formal leader of a team, whereas shared leadership (C. L. Pearce, 1997; C. L. Pearce & J. A. Conger, in press; C. L. Pearce & H. P. Sims, 2000) is a group process in which leadership is distributed among, and stems from, team members. Team effectiveness was measured approximately 6 months after the assessment of leadership and was also measured from the viewpoints of managers, internal customers, and team members. Using multiple regression, the authors found both vertical and shared leadership to be significantly related to team effectiveness ( p .05), although shared leadership appears to be a more useful predictor of team effectiveness than vertical leadership.",
"title": ""
},
{
"docid": "5db19f15ec148746613bdb48a4ca746a",
"text": "Wireless power transfer (WPT) system is a practical and promising way for charging electric vehicles due to its security, convenience, and reliability. The requirement for high-power wireless charging is on the rise, but implementing such a WPT system has been a challenge because of the constraints of the power semiconductors and the installation space limitation at the bottom of the vehicle. In this paper, bipolar coils and unipolar coils are integrated into the transmitting side and the receiving side to make the magnetic coupler more compact while delivering high power. The same-side coils are naturally decoupled; therefore, there is no magnetic coupling between the same-side coils. The circuit model of the proposed WPT system using double-sided LCC compensations is presented. Finite-element analysis tool ANSYS MAXWELL is adopted to simulate and design the magnetic coupler. Finally, an experimental setup is constructed to evaluate the proposed WPT system. The proposed WPT system achieved the dc–dc efficiency at 94.07% while delivering 4.73 kW to the load with a vertical air gap of 150 mm.",
"title": ""
},
{
"docid": "c96e8afc0c3e0428a257ba044cd2a35a",
"text": "The tumor necrosis factor ligand superfamily member receptor activator of nuclear factor-kB (NF-kB) ligand (RANKL), its cellular receptor, receptor activator of NF-kB (RANK), and the decoy receptor, osteoprotegerin (OPG) represent a novel cytokine triad with pleiotropic effects on bone metabolism, the immune system, and endocrine functions (1). RANKL is produced by osteoblastic lineage cells and activated T lymphocytes (2– 4) and stimulates its receptor, RANK, which is located on osteoclasts and dendritic cells (DC) (4, 5). The effects of RANKL within the skeleton include osteoblast –osteoclast cross-talks, resulting in enhanced differentiation, fusion, activation, and survival of osteoclasts (3, 6), while in the immune system, RANKL promotes the survival and immunostimulatory capacity of DC (1, 7). OPG acts as a soluble decoy receptor that neutralizes RANKL, thus preventing activation of RANK (8). The RANKL/RANK/OPG system has been implicated in various skeletal and immune-mediated diseases characterized by increased bone resorption and bone loss, including several forms of osteoporosis (postmenopausal, glucocorticoid-induced, and senile osteoporosis) (9), bone metastases (10), periodontal disease (11), and rheumatoid arthritis (2). While a relative deficiency of OPG has been found to be associated with osteoporosis in various animal models (9), the parenteral administration of OPG to postmenopausal women (3 mg/kg) was beneficial in rapidly reducing enhanced biochemical markers of bone turnover by 30–80% (12). These studies have clearly established the RANKL/ OPG system as a key cytokine network involved in the regulation of bone cell biology, osteoblast–osteoclast and bone-immune cross-talks, and maintenance of bone mass. In addition to providing substantial and detailed insights into the pathogenesis of various metabolic bone diseases, the administration of OPG may become a promising therapeutic option in the prevention and treatment of benign and malignant bone disease. Several studies have attempted to evaluate the clinical relevance and potential applications of serum OPG measurements in humans. Yano et al. were the first to assess systematically OPG serum levels (by an ELISA system) in women with osteoporosis (13). Intriguingly, OPG serum levels were negatively correlated with bone mineral density (BMD) at various sites (lumbar spine, femoral neck, and total body) and positively correlated with biochemical markers of bone turnover. In view of the established protective effects of OPG on bone, these findings came as a surprise, and were interpreted as an insufficient counter-regulatory mechanism to prevent bone loss. Another group which employed a similar design (but a different OPG ELISA system) could not detect a correlation between OPG serum levels and biochemical markers of bone turnover (14), but confirmed the negative correlation of OPG serum concentrations with BMD in postmenopausal women (15). In a recent study, Szulc and colleagues (16) evaluated OPG serum levels in an age-stratified male cohort, and observed positive correlations of OPG serum levels with bioavailable testosterone and estrogen levels, negative correlations with parathyroid hormone (PTH) serum levels and urinary excretion of total deoxypyridinoline, but no correlation with BMD at any site (16). 
The finding that PTH serum levels and gene expression of OPG by bone cells are inversely correlated was also reported in postmenopausal women (17), and systemic administration of human PTH(1-34) to postmenopausal women with osteoporosis inhibited circulating OPG serum levels (18). Finally, a study of patients with renal diseases showed a decline of serum OPG levels following initiation of systemic glucocorticoid therapy (19). The regulation pattern of OPG by systemic hormones has been described in vitro, and has led to the hypothesis that most hormones and cytokines regulate bone resorption by modulating either RANKL, OPG, or both (9). Interestingly, several studies showed that serum OPG levels increased with ageing and were higher in postmenopausal women (who have an increased rate of bone loss) as compared with men, thus supporting the hypothesis of a counter-regulatory function of OPG in order to prevent further bone loss (13 –16). In this issue of the Journal, Ueland and associates (20) add another important piece to the picture of OPG regulation in humans in vivo. By studying well-characterized patient cohorts with endocrine and immune diseases such as Cushing’s syndrome, acromegaly, growth hormone deficiency, HIV infection, and common variable immunodeficiency (CVI), the investigators reported",
"title": ""
},
{
"docid": "25490c79c329980ac8e0d53bf0e4147d",
"text": "A generalized EEG-based Neural Fuzzy system to predict driver's drowsiness was proposed in this study. Driver's drowsy state monitoring system has been implicated as a causal factor for the safety driving issue, especially when the driver fell asleep or distracted in driving. However, the difficulties in developing such a system are lack of significant index for detecting the driver's drowsy state in real-time and the interference of the complicated noise in a realistic and dynamic driving environment. In our past studies, we found that the electroencephalogram (EEG) power spectrum changes were highly correlated with the driver's behavior performance especially the occipital component. Different from presented subject-dependent drowsy state monitor systems, whose system performance may decrease rapidly when different subject applies with the drowsiness detection model constructed by others, in this study, we proposed a generalized EEG-based Self-organizing Neural Fuzzy system to monitor and predict the driver's drowsy state with the occipital area. Two drowsiness prediction models, subject-dependent and generalized cross-subject predictors, were investigated in this study for system performance analysis. Correlation coefficients and root mean square errors are showed as the experimental results and interpreted the performances of the proposed system significantly better than using other traditional Neural Networks ( p-value <;0.038). Besides, the proposed EEG-based Self-organizing Neural Fuzzy system can be generalized and applied in the subjects' independent sessions. This unique advantage can be widely used in the real-life applications.",
"title": ""
},
{
"docid": "ae98f5863738aff79d2c39b59f308cbd",
"text": "Nowadays Kidney Disease is a growing problem in the world wide. Due to the high possibility of death within a short period of time, a patient must be hospitalized and appropriately cured. Many Data Mining techniques are used in the health care industry for predicting the Kidney Disease. The Data Mining techniques, namely SVM, Naive Bayes, Decision Tree, Classification, Neural Network are used to analyze the accuracy for the kidney related disease.",
"title": ""
},
{
"docid": "a4a2f60248085008a91e8c5f5d99ef36",
"text": "In process mining, precision measures are used to quantify how much a process model overapproximates the behavior seen in an event log. Although several measures have been proposed throughout the years, no research has been done to validate whether these measures achieve the intended aim of quantifying over-approximation in a consistent way for all models and logs. This paper fills this gap by postulating a number of axioms for quantifying precision consistently for any log and any model. Further, we show through counter-examples that none of the existing measures consistently quantifies precision.",
"title": ""
},
{
"docid": "c1fc1a31d9f5033a7469796d1222aef3",
"text": "Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more cameras are mounted on actuated mechanisms such as a gimbal. Existing methods for DCC calibration rely on joint angle measurements to resolve the time-varying transformation between the dynamic and static camera. This information is usually provided by motor encoders, however, joint angle measurements are not always readily available on off-the-shelf mechanisms. In this paper, we present an encoderless approach for DCC calibration which simultaneously estimates the kinematic parameters of the transformation chain as well as the unknown joint angles. We also demonstrate the integration of an encoderless gimbal mechanism with a state-of-the art VIO algorithm, and show the extensions required in order to perform simultaneous online estimation of the joint angles and vehicle localization state. The proposed calibration approach is validated both in simulation and on a physical DCC composed of a 2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the calibrated mechanism integrated into the OKVIS VIO package, and demonstrate successful online joint angle estimation while maintaining localization accuracy that is comparable to a standard static multi-camera configuration.",
"title": ""
},
{
"docid": "ed351364658a99d4d9c10dd2b9be3c92",
"text": "Information technology continues to provide opportunities to alter the decisionmaking behavior of individuals, groups and organizations. Two related changes that are emerging are social media and Web 2.0 technologies. These technologies can positively and negatively impact the rationality and effectiveness of decision-making. For example, changes that help marketing managers alter consumer decision behavior may result in poorer decisions by consumers. Also, managers who heavily rely on a social network rather than expert opinion and facts may make biased decisions. A number of theories can help explain how social media may impact decision-making and the consequences.",
"title": ""
},
{
"docid": "4912a90f30127d2e70a2bbcb3733d524",
"text": "To better understand procrastination, researchers have sought to identify cognitive personality factors associated with it. The study reported here attempts to extend previous research by exploring the application of explanatory style to academic procrastination. Findings of the study are discussed from the perspective of employers of this new generation.",
"title": ""
}
] |
scidocsrr
|
d9f87fbeb4a51a47982c8440ef026a9e
|
Recursive Neural Networks Can Learn Logical Semantics
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
}
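The ADADELTA abstract above describes a per-dimension update built from decaying averages of squared gradients and squared updates, with no manually tuned learning rate. A minimal sketch of that update rule follows; the decay rate rho and the epsilon constant are illustrative defaults.

```python
# Minimal sketch of the ADADELTA update rule described above.
import numpy as np

class Adadelta:
    def __init__(self, shape, rho=0.95, eps=1e-6):
        self.rho, self.eps = rho, eps
        self.Eg2 = np.zeros(shape)    # running average of squared gradients
        self.Edx2 = np.zeros(shape)   # running average of squared updates

    def step(self, grad):
        self.Eg2 = self.rho * self.Eg2 + (1 - self.rho) * grad ** 2
        delta = -np.sqrt(self.Edx2 + self.eps) / np.sqrt(self.Eg2 + self.eps) * grad
        self.Edx2 = self.rho * self.Edx2 + (1 - self.rho) * delta ** 2
        return delta                  # add this to the parameters

# Usage on a toy quadratic f(x) = ||x||^2 with gradient 2x:
x = np.array([3.0, -2.0])
opt = Adadelta(x.shape)
for _ in range(500):
    x += opt.step(2 * x)
print(x)  # should have moved toward the origin
```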
] |
[
{
"docid": "b35e0b492d45c78b2d973eb5c3530a0d",
"text": "Topic classification (TC) of short text messages offers an effective and fast way to reveal events happening around the world ranging from those related to Disaster (e.g. Sandy hurricane) to those related to Violence (e.g. Egypt revolution). Previous approaches to TC have mostly focused on exploiting individual knowledge sources (KS) (e.g. DBpedia or Freebase) without considering the graph structures that surround concepts present in KSs when detecting the topics of Tweets. In this paper we introduce a novel approach for harnessing such graph structures from multiple linked KSs, by: (i) building a conceptual representation of the KSs, (ii) leveraging contextual information about concepts by exploiting semantic concept graphs, and (iii) providing a principled way for the combination of KSs. Experiments evaluating our TC classifier in the context of Violence detection (VD) and Emergency Responses (ER) show promising results that significantly outperform various baseline models including an approach using a single KS without linked data and an approach using only Tweets.",
"title": ""
},
{
"docid": "dcd9a430a69fc3a938ea1068273627ff",
"text": "Background Nursing theory should provide the principles that underpin practice and help to generate further nursing knowledge. However, a lack of agreement in the professional literature on nursing theory confuses nurses and has caused many to dismiss nursing theory as irrelevant to practice. This article aims to identify why nursing theory is important in practice. Conclusion By giving nurses a sense of identity, nursing theory can help patients, managers and other healthcare professionals to recognise the unique contribution that nurses make to the healthcare service ( Draper 1990 ). Providing a definition of nursing theory also helps nurses to understand their purpose and role in the healthcare setting.",
"title": ""
},
{
"docid": "6b07c3fb97ab3a1001cf3753adb6754f",
"text": "• Starting with the fact that school education has failed to become education for critical thinking and that one of the reasons for that could be in how education for critical thinking is conceptualised, this paper presents: (1) an analysis of the predominant approach to education for critical thinking through the implementation of special programs and methods, and (2) an attempt to establish different approaches to education for critical thinking. The overview and analysis of understanding education for developing critical thinking as the implementation of special programs reveal that it is perceived as a decontextualised activity, reduced to practicing individual intellectual skills. Foundations for a different approach, which could be characterised as the ‘education for critical competencies’, are found in ideas of critical pedagogy and open curriculum theory. This approach differs from the predominant approach in terms of how the nature and purpose of critical thinking and education for critical thinking are understood. In the approach of education for critical competencies, it is not sufficient to introduce special programs and methods for the development of critical thinking to the existing educational system. This approach emphasises the need to question and reconstruct the status, role, and power of pupils and teachers in the teaching process, but also in the process of curriculum development.",
"title": ""
},
{
"docid": "b8d940b9b753c043da01dbcd737fdd58",
"text": "In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks. While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.",
"title": ""
},
{
"docid": "7aaa9cb86b17fdd5672677eefb17bf76",
"text": "Although many methods are available to forecast short-term electricity load based on small scale data sets, they may not be able to accommodate large data sets as electricity load data becomes bigger and more complex in recent years. In this paper, a novel machine learning model combining convolutional neural network with K-means clustering is proposed for short-term load forecasting with improved scalability. The large data set is clustered into subsets using K-means algorithm, then the obtained subsets are used to train the convolutional neural network. A real-world power industry data set containing more than 1.4 million of load records is used in this study and the experimental results demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "2768cae9d76cd04eb7b4c82fceed470c",
"text": "In this paper we present a method for synthesizing English handwritten textlines from ASCII transcriptions. The method is based on templates of characters and the Delta LogNormal model of handwriting generation. To generate a textline, first a static image of the textline is built by concatenating perturbed versions of the character templates. Then strokes and corresponding virtual targets are extracted and randomly perturbed, and finally the textline is drawn using overlapping strokes and delta-lognormal velocity profiles in accordance with the Delta LogNormal theory. The generated textlines are used as training data for a hidden Markov model based off-line handwritten textline recognizer. First results show that adding such generated textlines to the natural training set may be beneficial.",
"title": ""
},
{
"docid": "9e13ee2693415e6597c54660d45a93bd",
"text": "Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model where the latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, albedo and surface normals estimation from a single image is possible in our model. Experiments demonstrate that our model is able to generalize as well as improve over standard baselines in one-shot face recognition.",
"title": ""
},
{
"docid": "61556b092c6b5607e8bf2c556202570f",
"text": "The problem of recognizing actions in realistic videos is challenging yet absorbing owing to its great potentials in many practical applications. Most previous research is limited due to the use of simplified action databases under controlled environments or focus on excessively localized features without sufficiently encapsulating the spatio-temporal context. In this paper, we propose to model the spatio-temporal context information in a hierarchical way, where three levels of context are exploited in ascending order of abstraction: 1) point-level context (SIFT average descriptor), 2) intra-trajectory context (trajectory transition descriptor), and 3) inter-trajectory context (trajectory proximity descriptor). To obtain efficient and compact representations for the latter two levels, we encode the spatiotemporal context information into the transition matrix of a Markov process, and then extract its stationary distribution as the final context descriptor. Building on the multichannel nonlinear SVMs, we validate this proposed hierarchical framework on the realistic action (HOHA) and event (LSCOM) recognition databases, and achieve 27% and 66% relative performance improvements over the state-of-the-art results, respectively. We further propose to employ the Multiple Kernel Learning (MKL) technique to prune the kernels towards speedup in algorithm evaluation.",
"title": ""
},
{
"docid": "673674dd11047747db79e5614daa4974",
"text": "Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR.",
"title": ""
},
{
"docid": "44bda14c1d3ee29812acb450e51a9f87",
"text": "In this paper, the mathematical model of the posture inverse kinematics is established. According to the structure of 2DOF parallel manipulator, the simulation model of mechanism is built using the Matlab/SimMechanics. The kinematics simulation of the parallel manipulator is obtained and confirmed correct. With the Virtual Reality Toolbox, the virtual reality of the parallel manipulator is carried out. During the simulation, the motion animate is obtained. It indicates that Matlab/Simulink can greatly reduce the designer’s work and provide a powerful and convenient tool for the simulation and analysis of parallel manipulators. Keywordsparallel manipulator; posture inverse kinematics; simulation; SimMechanics; virtual reality.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
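The spelling-correction abstract above compares learned string-to-string models against baselines such as plain and weighted edit distance with dictionary lookup. The toy sketch below shows only the simplest baseline flavor (plain edit distance plus dictionary frequencies); the dictionary and counts are made up for illustration and are not from the paper.

```python
# Toy "edit distance + dictionary lookup" spelling-correction baseline.
from functools import lru_cache

DICTIONARY = {"their": 120, "there": 300, "the": 1000, "then": 250, "than": 200}  # toy counts

@lru_cache(maxsize=None)
def edit_distance(a, b):
    """Plain Levenshtein distance via memoized recursion."""
    if not a:
        return len(b)
    if not b:
        return len(a)
    cost = 0 if a[-1] == b[-1] else 1
    return min(edit_distance(a[:-1], b) + 1,
               edit_distance(a, b[:-1]) + 1,
               edit_distance(a[:-1], b[:-1]) + cost)

def correct(word, k=3):
    """Return the k best dictionary words, ranked by edit distance then frequency."""
    ranked = sorted(DICTIONARY, key=lambda w: (edit_distance(word, w), -DICTIONARY[w]))
    return ranked[:k]

print(correct("thier"))  # -> ['the', 'there', 'then'] with these toy counts
```

Note that plain edit distance with frequency tie-breaking does not even recover "their" here, which is exactly the kind of weakness the weighted and learned models in the abstract are meant to address.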
{
"docid": "ff69af9c6ce771b0db8caeaa6da5478f",
"text": "The use of Internet as a mean of shopping goods and services is growing over the past decade. Businesses in the e-commerce sector realize that the key factors for success are not limited to the existence of a website and low prices but must also include high standards of e-quality. Research indicates that the attainment of customer satisfaction brings along plenty of benefits. Furthermore, trust is of paramount importance, in ecommerce, due to the fact that that its establishment can diminish the perceived risk of using an internet service. The purpose of this study is to investigate the impact of customer perceived quality of an internet shop on customers’ satisfaction and trust. In addition, the possible effect of customer satisfaction on trust is also examined. An explanatory research approach was adopted in order to identify causal relationships between e-quality, customer satisfaction and trust. This was accomplished through field research by utilizing an interviewer-administered questionnaire. The questionnaire was largely based on existing constructs in relative literature. E-quality was divided into 5 dimensions, namely ease of use, e-scape, customization, responsiveness, and assurance. After being successfully pilot-tested by the managers of 3 Greek companies developing ecommerce software, 4 managers of Greek internet shops and 5 internet shoppers, the questionnaire was distributed to internet shoppers in central Greece. This process had as a result a total of 171 correctly answered questionnaires. Reliability tests and statistical analyses were performed to both confirm scale reliability and test research hypotheses. The findings indicate that all the examined e-quality dimensions expose a significant positive influence on customer satisfaction, with ease of use, e-scape and assurance being the most important ones. One the other hand, rather surprisingly, the only e-quality dimension that proved to have a significant positive impact on trust was customization. Finally, satisfaction was revealed to have a significant positive relation with trust.",
"title": ""
},
{
"docid": "edfc15795f1f69d31c36f73c213d2b7d",
"text": "Three studies tested whether adopting strong (relative to weak) approach goals in relationships (i.e., goals focused on the pursuit of positive experiences in one's relationship such as fun, growth, and development) predict greater sexual desire. Study 1 was a 6-month longitudinal study with biweekly assessments of sexual desire. Studies 2 and 3 were 2-week daily experience studies with daily assessments of sexual desire. Results showed that approach relationship goals buffered against declines in sexual desire over time and predicted elevated sexual desire during daily sexual interactions. Approach sexual goals mediated the association between approach relationship goals and daily sexual desire. Individuals with strong approach goals experienced even greater desire on days with positive relationship events and experienced less of a decrease in desire on days with negative relationships events than individuals who were low in approach goals. In two of the three studies, the association between approach relationship goals and sexual desire was stronger for women than for men. Implications of these findings for maintaining sexual desire in long-term relationships are discussed.",
"title": ""
},
{
"docid": "7f6f959ada943050d23a61b299d0bd4a",
"text": "Light has profoundly influenced the evolution of life on earth. As widely appreciated, light enables us to generate images of our environment. However, light — through intrinsically photosensitive retinal ganglion cells (ipRGCs) — also influences behaviours that are essential for our health and quality of life but are independent of image formation. These include the synchronization of the circadian clock to the solar day, tracking of seasonal changes and the regulation of sleep. Irregular light environments lead to problems in circadian rhythms and sleep, which eventually cause mood and learning deficits. Recently, it was found that irregular light can also directly affect mood and learning without producing major disruptions in circadian rhythms and sleep. In this Review, we discuss the indirect and direct influence of light on mood and learning, and provide a model for how light, the circadian clock and sleep interact to influence mood and cognitive functions.",
"title": ""
},
{
"docid": "1c0d075c345998a639333b68b3590652",
"text": "This half-day tutorial aims to introduce the fundamental concepts, principles and methods of visualizing and exploring the development of a scientific knowledge domain. The tutorial explains the design rationale and various applications of CiteSpace ' a freely available tool for interactive and exploratory analysis of the evolution of a scientific domain, ranging from a single specialty to multiple interrelated scientific frontiers. The tutorial demonstrates the analytic procedure of applying CiteSpace to a diverse range of examples and how one may interpret various patterns and trends revealed by interactive visual analytics.magnetic field, applied along the easy axis of the elements.",
"title": ""
},
{
"docid": "8c00b522ae9429f6f9cd7fb7174578ec",
"text": "We extend photometric stereo to make it work with internet images, which are typically associated with different viewpoints and significant noise. For popular tourism sites, thousands of images can be obtained from internet search engines. With these images, our method computes the global illumination for each image and the surface orientation at some scene points. The illumination information can then be used to estimate the weather conditions (such as sunny or cloudy) for each image, since there is a strong correlation between weather and scene illumination. We demonstrate our method on several challenging examples.",
"title": ""
},
{
"docid": "d06e4f97786f8ecf9694ed270a36c24a",
"text": "In this paper, an improved maximum power point (MPP) tracking (MPPT) with better performance based on voltage-oriented control (VOC) is proposed to solve a fast-changing irradiation problem. In VOC, a cascaded control structure with an outer dc link voltage control loop and an inner current control loop is used. The currents are controlled in a synchronous orthogonal d,q frame using a decoupled feedback control. The reference current of proportional-integral (PI) d-axis controller is extracted from the dc-side voltage regulator by applying the energy-balancing control. Furthermore, in order to achieve a unity power factor, the q-axis reference is set to zero. The MPPT controller is applied to the reference of the outer loop control dc voltage photovoltaic (PV). Without PV array power measurement, the proposed MPPT identifies the correct direction of the MPP by processing the d-axis current reflecting the power grid side and the signal error of the PI outer loop designed to only represent the change in power due to the changing atmospheric conditions. The robust tracking capability under rapidly increasing and decreasing irradiance is verified experimentally with a PV array emulator. Simulations and experimental results demonstrate that the proposed method provides effective, fast, and perfect tracking.",
"title": ""
},
{
"docid": "e6633bf0c5f2fd18f739a7f3a1751854",
"text": "Image inpainting in wavelet domain refers to the recovery of an image from incomplete and/or inaccurate wavelet coefficients. To reconstruct the image, total variation (TV) models have been widely used in the literature and they produce high-quality reconstructed images. In this paper, we consider an unconstrained TV-regularized, l2-data-fitting model to recover the image. The model is solved by the alternating direction method (ADM). At each iteration, ADM needs to solve three subproblems, all of which have closed-form solutions. The per-iteration computational cost of ADM is dominated by two Fourier transforms and two wavelet transforms, all of which admit fast computation. Convergence of the ADM iterative scheme is readily obtained. We also discuss extensions of this ADM scheme to solving two closely related constrained models. We present numerical results to show the efficiency and stability of ADM for solving wavelet domain image inpainting problems. Numerical comparison results of ADM with some recent algorithms are also reported.",
"title": ""
},
{
"docid": "d3afbb88f0575bd18365c85c6faea868",
"text": "The present paper examines the causal linkage between foreign direct investment (FDI), financial development, and economic growth in a panel of 4 countries of North Africa (Tunisia, Morocco, Algeria and Egypt) over the period 1980-2011. The study moves away from the traditional cross-sectional analysis, and focuses on more direct evidence of the channels through which FDI inflows can promote economic growth of the host country. Using Generalized Method of Moment (GMM) panel data analysis, we find strong evidence of a positive relationship between FDI and economic growth. We also find evidence that the development of the domestic financial system is an important prerequisite for FDI to have a positive effect on economic growth. The policy implications of this study appeared clear. Improvement efforts need to be driven by local-level reforms to ensure the development of domestic financial system in order to maximize the benefits of the presence of FDI.",
"title": ""
},
{
"docid": "3d98dba389124835ebd0dd7fec472719",
"text": "We present a mobile robot navigation system guided by a novel vision-based road recognition approach. The system represents the road as a set of lines extrapolated from the detected image contour segments. These lines enable the robot to maintain its heading by centering the vanishing point in its field of view, and to correct the long term drift from its original lateral position. We integrate odometry and our visual road recognition system into a grid-based local map that estimates the robot pose as well as its surroundings to generate a movement path. Our road recognition system is able to estimate the road center on a standard dataset with 25,076 images to within 11.42 cm (with respect to roads at least 3 m wide). It outperforms three other state-of-the-art systems. In addition, we extensively test our navigation system in four busy college campus environments using a wheeled robot. Our tests cover more than 5 km of autonomous driving without failure. This demonstrates robustness of the proposed approach against challenges that include occlusion by pedestrians, non-standard complex road markings and shapes, shadows, and miscellaneous obstacle objects.",
"title": ""
}
] |
scidocsrr
|
efbe7c744693e9aac16e66d9aee8b2ef
|
Distance and similarity measures for hesitant fuzzy sets
|
[
{
"docid": "82592f60e0039089e3c16d9534780ad5",
"text": "A model for grey-tone image enhancement using the concept of fuzzy sets is suggested. It involves primary enhancement, smoothing, and then final enhancement. The algorithm for both the primary and final enhancements includes the extraction of fuzzy properties corresponding to pixels and then successive applications of the fuzzy operator \"contrast intensifier\" on the property plane. The three different smoothing techniques considered in the experiment are defocussing, averaging, and max-min rule over the neighbors of a pixel. The reduction of the \"index of fuzziness\" and \"entropy\" for different enhanced outputs (corresponding to different values of fuzzifiers) is demonstrated for an English script input. Enhanced output as obtained by histogram modification technique is also presented for comparison.",
"title": ""
}
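The fuzzy-enhancement abstract above describes fuzzification of grey levels followed by successive applications of the "contrast intensifier" (INT) operator on the property plane. A small sketch of that idea follows; the fuzzifier constants and the number of passes are assumed values, not taken from the paper.

```python
# Sketch of fuzzy contrast intensification: grey levels -> fuzzy property plane ->
# repeated INT operator -> back to grey levels.  Fe, Fd, and `passes` are assumptions.
import numpy as np

def enhance(img, Fe=2.0, Fd=None, passes=2):
    img = img.astype(np.float64)
    x_max = max(float(img.max()), 1.0)
    if Fd is None:
        Fd = x_max / 2.0                          # denominational fuzzifier (assumed)
    mu = (1.0 + (x_max - img) / Fd) ** (-Fe)      # fuzzification: membership in "bright"
    for _ in range(passes):                       # successive INT (contrast intensifier) operators
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    out = x_max - Fd * (mu ** (-1.0 / Fe) - 1.0)  # defuzzification back to grey levels
    return np.clip(out, 0.0, x_max)
```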
] |
[
{
"docid": "85d7ff422f9753543494f6a1c4bdf21c",
"text": "Early in the last century, 3 events put Colorado in the orthodontic spotlight: the discovery-by an orthodontist-of the caries-preventive powers of fluoridated water, the formation of dentistry's first specialty board, and the founding of a supply company by and for orthodontists. Meanwhile, inventive practitioners were giving the profession more choices of treatment modalities, and stainless steel was making its feeble debut.",
"title": ""
},
{
"docid": "8824f01def7d13db2e436c074d459676",
"text": "In this paper, numerical treatment is presented for the solution of boundary value problems of one-dimensional Bratu-type equations using artificial neural networks. Three types of transfer functions including Log-sigmoid, radial basis, and tan-sigmoid are used in the neural networks’ modeling. The optimum weights for all the three networks are searched with the interior point method. Various test cases of Bratu-type equations have been simulated using the developed models. The accuracy, convergence, and effectiveness of the methods are substantiated by a large number of simulation data for each model by taking enough independent runs.",
"title": ""
},
{
"docid": "138d45574cee04ff8fa3020f5fe85a21",
"text": "Physical contact between melanocytes and keratinocytes is a prerequisite for melanosome transfer to occur, but cellular signals induced during or after contact are not fully understood. Herein, it is shown that interactions between melanocyte and keratinocyte plasma membranes induced a transient intracellular calcium signal in keratinocytes that was required for pigment transfer. This intracellular calcium signal occurred due to release of calcium from intracellular stores. Pigment transfer observed in melanocyte-keratinocyte co-cultures was inhibited when intracellular calcium in keratinocytes was chelated. We propose that a 'ligand-receptor' type interaction exists between melanocytes and keratinocytes that triggers intracellular calcium signalling in keratinocytes and mediates melanin transfer.",
"title": ""
},
{
"docid": "196ddcefb2c3fcb6edd5e8d108f7e219",
"text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.",
"title": ""
},
{
"docid": "1f28ca58aabd0e2523492308c4da3929",
"text": "Sepsis is a leading cause of in-hospital death over the world and septic shock, the most severe complication of sepsis, reaches a mortality rate as high as 50%. Early diagnosis and treatment can prevent most morbidity and mortality. In this work, Recent Temporal Patterns (RTPs) are used in conjunction with SVM classifier to build a robust yet interpretable model for early diagnosis of septic shock. This model is applied to two different prediction tasks: visit-level early diagnosis and event-level early prediction. For each setting, this model is compared against several strong baselines including atemporal method called Last-Value, six classic machine learning algorithms, and lastly, a state-of-the-art deep learning model: Long Short-Term Memory (LSTM). Our results suggest that RTP-based model can outperform all aforementioned baseline models for both diagnosis tasks. More importantly, the extracted interpretative RTPs can shed lights for the clinicians to discover progression behavior and latent patterns among septic shock patients.",
"title": ""
},
{
"docid": "3cc0707cec7af22db42e530399e762a8",
"text": "While watching television, people increasingly consume additional content related to what they are watching. We consider the task of finding video content related to a live television broadcast for which we leverage the textual stream of subtitles associated with the broadcast. We model this task as a Markov decision process and propose a method that uses reinforcement learning to directly optimize the retrieval effectiveness of queries generated from the stream of subtitles. Our dynamic query modeling approach significantly outperforms state-of-the-art baselines for stationary query modeling and for text-based retrieval in a television setting. In particular we find that carefully weighting terms and decaying these weights based on recency significantly improves effectiveness. Moreover, our method is highly efficient and can be used in a live television setting, i.e., in near real time.",
"title": ""
},
{
"docid": "4ac15541b7d1f77f55da749e3871efea",
"text": "Acidovorax avenae subsp. citrulli is the causal agent of bacterial fruit blotch (BFB), a threatening disease of watermelon, melon, and other cucurbits. Despite the economic importance of BFB, relatively little is known about basic aspects of the pathogen's biology and the molecular basis of its interaction with host plants. To identify A. avenae subsp. citrulli genes associated with pathogenicity, we generated a transposon (Tn5) mutant library on the background of strain M6, a group I strain of A. avenae subsp. citrulli, and screened it for reduced virulence by seed-transmission assays with melon. Here, we report the identification of a Tn5 mutant with reduced virulence that is impaired in pilM, which encodes a protein involved in assembly of type IV pili (TFP). Further characterization of this mutant revealed that A. avenae subsp. citrulli requires TFP for twitching motility and wild-type levels of biofilm formation. Significant reductions in virulence and biofilm formation as well as abolishment of twitching were also observed in insertional mutants affected in other TFP genes. We also provide the first evidence that group I strains of A. avenae subsp. citrulli can colonize and move through host xylem vessels.",
"title": ""
},
{
"docid": "b214270aacf9c9672af06e58ff26aa5a",
"text": "Traditional techniques for measuring similarities between time series are based on handcrafted similarity measures, whereas more recent learning-based approaches cannot exploit external supervision. We combine ideas from timeseries modeling and metric learning, and study siamese recurrent networks (SRNs) that minimize a classification loss to learn a good similarity measure between time series. Specifically, our approach learns a vectorial representation for each time series in such a way that similar time series are modeled by similar representations, and dissimilar time series by dissimilar representations. Because it is a similarity prediction models, SRNs are particularly well-suited to challenging scenarios such as signature recognition, in which each person is a separate class and very few examples per class are available. We demonstrate the potential merits of SRNs in withindomain and out-of-domain classification experiments and in one-shot learning experiments on tasks such as signature, voice, and sign language recognition.",
"title": ""
},
{
"docid": "ae4ffd43ea098581aa1d1980e61ebe6c",
"text": "In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this position paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS we propose to combine the learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared to past and current research efforts in this area, the technical approach depicted in this paper is deeply interdisciplinary and more comprehensive, calling for the synergic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.",
"title": ""
},
{
"docid": "6a143e9aab34836fc34ffcd6cc9d1096",
"text": "MOTIVATION\nDNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data.\n\n\nRESULTS\nWe develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t -test, provide a systematic inference approach that compares favorably with simple t -test or fold methods, and partly compensate for the lack of replication.",
"title": ""
},
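The microarray abstract above regularizes each gene's empirical variance with a local background variance from neighboring genes before applying a t-test. The sketch below shows one plausible way to do that; the window size, pseudo-count weighting, and degrees of freedom are assumptions, not the paper's exact formulas.

```python
# Sketch of a regularized-variance t-statistic for two-condition expression data.
# x, y: (genes, replicates) arrays of log-expression values.
import numpy as np
from scipy import stats

def regularized_t(x, y, window=101, v0=10):
    def reg_var(a):
        n = a.shape[1]
        var = a.var(axis=1, ddof=1)
        order = np.argsort(a.mean(axis=1))        # neighbours = genes with similar mean expression
        bg = np.empty_like(var)
        half = window // 2
        for rank, g in enumerate(order):
            lo, hi = max(0, rank - half), min(len(order), rank + half + 1)
            bg[g] = var[order[lo:hi]].mean()      # local background variance
        # blend background and empirical variances (pseudo-count weighting is assumed)
        return (v0 * bg + (n - 1) * var) / (v0 + n - 1), n
    vx, nx = reg_var(x)
    vy, ny = reg_var(y)
    t = (x.mean(axis=1) - y.mean(axis=1)) / np.sqrt(vx / nx + vy / ny)
    p = 2 * stats.t.sf(np.abs(t), nx + ny - 2)    # simplistic degrees of freedom
    return t, p
```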
{
"docid": "ffc2db2f3762b77af679f2a757bbc745",
"text": "We study, for the first time, automated inference on criminality based solely on still face images. Via supervised machine learning, we build four classifiers (logistic regression, KNN, SVM, CNN) using facial images of 1856 real persons controlled for race, gender, age and facial expressions, nearly half of whom were convicted criminals, for discriminating between criminals and non-criminals. All four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic. Also, we find some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle. Above all, the most important discovery of this research is that criminal and non-criminal face images populate two quite distinctive manifolds. The variation among criminal faces is significantly greater than that of the non-criminal faces. The two manifolds consisting of criminal and non-criminal faces appear to be concentric, with the non-criminal manifold lying in the kernel with a smaller span, exhibiting a law of normality for faces of non-criminals. In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people.",
"title": ""
},
{
"docid": "d15dc60ef2fb1e6096a3aba372698fd9",
"text": "One of the most interesting applications of Industry 4.0 paradigm is enhanced process control. Traditionally, process control solutions based on Cyber-Physical Systems (CPS) consider a top-down view where processes are represented as executable high-level descriptions. However, most times industrial processes follow a bottom-up model where processes are executed by low-level devices which are hard-programmed with the process to be executed. Thus, high-level components only may supervise the process execution as devices cannot modify dynamically their behavior. Therefore, in this paper we propose a vertical CPS-based solution (including a reference and a functional architecture) adequate to perform enhanced process control in Industry 4.0 scenarios with a bottom-up view. The proposed solution employs an event-driven service-based architecture where control is performed by means of finite state machines. Furthermore, an experimental validation is provided proving that in more than 97% of cases the proposed solution allows a stable and effective control.",
"title": ""
},
{
"docid": "8bc615dfa51a9c5835660c1b0eb58209",
"text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.",
"title": ""
},
{
"docid": "e870f2fe9a26b241bdeca882b6186169",
"text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.",
"title": ""
},
{
"docid": "d657085072f829db812a2735d0e7f41c",
"text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.",
"title": ""
},
{
"docid": "310aa0a02f8fc8b7b6d31c987a12a576",
"text": "We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.",
"title": ""
},
{
"docid": "107bb53e3ceda3ee29fc348febe87f11",
"text": "The objective here is to develop a flat surface area measuring system which is used to calculate the surface area of any irregular sheet. The irregular leather sheet is used in this work. The system is self protected by user name and password set through software for security purpose. Only authorize user can enter into the system by entering the valid pin code. After entering into the system, the user can measure the area of any irregular sheet, monitor and control the system. The heart of the system is Programmable Logic Controller (Master K80S) which controls the complete working of the system. The controlling instructions for the system are given through the designed Human to Machine Interface (HMI). For communication purpose the GSM modem is also interfaced with the Programmable Logic Controller (PLC). The remote user can also monitor the current status of the devices by sending SMS message to the GSM modem.",
"title": ""
},
{
"docid": "a8534157b31e858b5825acd8f4fff269",
"text": "In recent years, the Smart City concept is emerging as a way to increase efficiency, reduce costs, and improve the overall quality of citizen life. The rise of Smart City solutions is encouraged by the increasing availability of Internet of Things (IoT) devices and crowd sensing technologies. This paper presents an IoT Crowd Sensing platform that offers a set of services to citizens by exploiting a network of bicycles as IoT probes. Based on a survey conducted to identify the most interesting bike-enabled services, the SmartBike platform provides: real time remote geo-location of users’ bikes, anti-theft service, information about traveled route, and air pollution monitoring. The proposed SmartBike platform is composed of three main components: the SmartBike mobile sensors for data collection installed on the bicycle; the end-user devices implementing the user interface for geo-location and anti-theft; and the SmartBike central servers for storing and processing detected data and providing a web interface for data visualization. The suitability of the platform was evaluated through the implementation of an initial prototype. Results demonstrate that the proposed SmartBike platform is able to provide the stated services, and, in addition, that the accuracy of the acquired air quality measurements is compatible with the one provided by the official environmental monitoring system of the city of Turin. The described platform will be adopted within a project promoted by the city of Turin, that aims at helping people making their mobility behavior more sustainable.",
"title": ""
},
{
"docid": "d911ccb1bbb761cbfee3e961b8732534",
"text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.",
"title": ""
},
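As a hedged illustration of the SIFT matching described above (not code from the surveyed paper), the following sketch detects SIFT keypoints in two views and keeps distinctive matches with Lowe's ratio test; the image file names are placeholders, and `cv2.SIFT_create` assumes OpenCV 4.4 or later.

```python
# Illustrative SIFT matching between two views of a scene with OpenCV.
import cv2

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder images
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```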
{
"docid": "8cd62b12b4406db29b289a3e1bd5d05a",
"text": "Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model by short jokes of Conan O’Brien with the help of POS Tagger. We evaluate the performance of our model by human ratings from five English speakers. In terms of the average score, our model outperforms a probabilistic model that puts words into slots in a fixed-structure sentence.",
"title": ""
}
] |
scidocsrr
|
0ce205f7cf837636edeb38b59a4c0ab4
|
Learning Deep Representations of Fine-Grained Visual Descriptions
|
[
{
"docid": "a2f91e55b5096b86f6fa92e701c62898",
"text": "The main question we address in this paper is how to use purely textual description of categories with no training images to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need to explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach on two fine-grained categorization datasets, and the results indicate successful classifier prediction.",
"title": ""
}
] |
[
{
"docid": "930787311add9bede553c6f52f420fb9",
"text": "Prior research has demonstrated that interpersonal trust is critical to knowledge transfer in organizational settings. Yet there has been only limited systematic empirical work examining factors that promote a knowledge seekers trust in a knowledge source. We propose three categories of variables that affect interpersonal trust in this context: attributes of the relationship between the knowledge seeker and source; attributes of the knowledge source; and attributes of the knowledge seeker. We analyzed these multilevel data simultaneously with hierarchical linear modeling (HLM) using survey data from three companies in different industries and countries. We found that (1) variables in all three categories were statistically significant, with the biggest effect coming from more malleable features such as the cognitive dimension of social capital (i.e., shared vision and shared language), and little or no effect from more stable and visible features such as formal structure and demographic similarity; (2) benevolence-based trust was easier to predict than competence-based trust, both in terms of the number of significant predictors and the variance accounted for; and (3) knowledge seekers reliance on knowledgesource behaviors in determining how much to trust a sources competencethe so-called clues for competencewere relied on even more heavily by knowledge seekers with more division tenure, suggesting that certain attitudes in the trust realm may solidify over time.",
"title": ""
},
{
"docid": "aa88b71c68ed757faf9eb896a81003f5",
"text": "Purpose The present study evaluated the platelet distribution pattern and growth factor release (VEGF, TGF-β1 and EGF) within three PRF (platelet-rich-fibrin) matrices (PRF, A-PRF and A-PRF+) that were prepared using different relative centrifugation forces (RCF) and centrifugation times. Materials and methods immunohistochemistry was conducted to assess the platelet distribution pattern within three PRF matrices. The growth factor release was measured over 10 days using ELISA. Results The VEGF protein content showed the highest release on day 7; A-PRF+ showed a significantly higher rate than A-PRF and PRF. The accumulated release on day 10 was significantly higher in A-PRF+ compared with A-PRF and PRF. TGF-β1 release in A-PRF and A-PRF+ showed significantly higher values on days 7 and 10 compared with PRF. EGF release revealed a maximum at 24 h in all groups. Toward the end of the study, A-PRF+ demonstrated significantly higher EGF release than PRF. The accumulated growth factor releases of TGF-β1 and EGF on day 10 were significantly higher in A-PRF+ and A-PRF than in PRF. Moreover, platelets were located homogenously throughout the matrix in the A-PRF and A-PRF+ groups, whereas platelets in PRF were primarily observed within the lower portion. Discussion the present results show an increase growthfactor release by decreased RCF. However, further studies must be conducted to examine the extent to which enhancing the amount and the rate of released growth factors influence wound healing and biomaterial-based tissue regeneration. Conclusion These outcomes accentuate the fact that with a reduction of RCF according to the previously LSCC (described low speed centrifugation concept), growth factor release can be increased in leukocytes and platelets within the solid PRF matrices.",
"title": ""
},
{
"docid": "35a0a4cdba6fbab9f02bf4e50aace306",
"text": "This paper analyzes task assignment for heterogeneous air vehicles using a guaranteed conflict-free assignment algorithm, the Consensus Based Bundle Algorithm (CBBA). We extend this recently proposed algorithm to handle two realistic multiUAV operational complications. Our first extension accounts for obstacle regions in order to generate collision free paths for UAVs. Our second extension reduces task planner sensitivity to sensor measurement noise, and thereby minimizes churning behavior in flight paths. After integrating our enhanced CBBA module with a 3D visualization and interaction software tool, we simulate multiple aircraft servicing stationary and moving ground targets. Preliminary simulation results establish that consistent, conflict-free multi-UAV path assignments can be calculated on the order of a few seconds. The enhanced CBBA consequently demonstrates significant potential for real-time performance in stressing environments.",
"title": ""
},
{
"docid": "443191f41aba37614c895ba3533f80ed",
"text": "De novo engineering of gene circuits inside cells is extremely difficult, and efforts to realize predictable and robust performance must deal with noise in gene expression and variation in phenotypes between cells. Here we demonstrate that by coupling gene expression to cell survival and death using cell–cell communication, we can programme the dynamics of a population despite variability in the behaviour of individual cells. Specifically, we have built and characterized a ‘population control’ circuit that autonomously regulates the density of an Escherichia coli population. The cell density is broadcasted and detected by elements from a bacterial quorum-sensing system, which in turn regulate the death rate. As predicted by a simple mathematical model, the circuit can set a stable steady state in terms of cell density and gene expression that is easily tunable by varying the stability of the cell–cell communication signal. This circuit incorporates a mechanism for programmed death in response to changes in the environment, and allows us to probe the design principles of its more complex natural counterparts.",
"title": ""
},
{
"docid": "8c636402670a00e993efc66f419540f6",
"text": "Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions. The basic method can be expressed as a linear-threshold algorithm. A primary advantage of this algorithm is that the number of mistakes grows only logarithmically with the number of irrelevant attributes in the examples. At the same time, the algorithm is computationally efficient in both time and space.",
"title": ""
},
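The linear-threshold, mistake-bounded learner described above corresponds to Winnow-style multiplicative updates. The sketch below is a minimal, assumed implementation for learning a monotone disjunction; the target concept and data are invented for illustration, and the promotion/demotion scheme is one common variant rather than the paper's exact algorithm.

```python
# Minimal Winnow-style sketch: multiplicative updates on a linear threshold.
import random

def winnow(examples, n, alpha=2.0):
    """examples: iterable of (x, y) with x a 0/1 list of length n, y in {0, 1}."""
    w = [1.0] * n
    theta = n            # fixed threshold
    mistakes = 0
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:
            mistakes += 1
            if y == 1:   # promotion: scale up weights of active attributes
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:        # demotion: scale down weights of active attributes
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# Invented target concept: x1 OR x3, surrounded by many irrelevant attributes.
n = 100
data = []
for _ in range(500):
    x = [random.randint(0, 1) for _ in range(n)]
    data.append((x, 1 if (x[0] or x[2]) else 0))

w, m = winnow(data, n)
print("mistakes:", m)   # grows only logarithmically with irrelevant attributes
```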
{
"docid": "4adfa3026fbfceca68a02ee811d8a302",
"text": "Designing a new domain specific language is as any other complex task sometimes error-prone and usually time consuming, especially if the language shall be of high-quality and comfortably usable. Existing tool support focuses on the simplification of technical aspects but lacks support for an enforcement of principles for a good language design. In this paper we investigate guidelines that are useful for designing domain specific languages, largely based on our experience in developing languages as well as relying on existing guidelines on general purpose (GPLs) and modeling languages. We defined guidelines to support a DSL developer to achieve better quality of the language design and a better acceptance among its users.",
"title": ""
},
{
"docid": "a8fb6ca739d0d1e75b8b94302f2139a2",
"text": "OBJECTIVE\nTo assess the conditions under which employing an overview of systematic reviews is likely to lead to a high risk of bias.\n\n\nSTUDY DESIGN\nTo synthesise existing guidance concerning overview practice, a scoping review was conducted. Four electronic databases were searched with a pre-specified strategy (PROSPERO 2015:CRD42015027592) ending October 2015. Included studies needed to describe or develop overview methodology. Data were narratively synthesised to delineate areas highlighted as outstanding challenges or where methodological recommendations conflict.\n\n\nRESULTS\nTwenty-four papers met the inclusion criteria. There is emerging debate regarding overlapping systematic reviews; systematic review scope; quality of included research; updating; and synthesizing and reporting results. While three functions for overviews have been proposed-identify gaps, explore heterogeneity, summarize evidence-overviews cannot perform the first; are unlikely to achieve the second and third simultaneously; and can only perform the third under specific circumstances. Namely, when identified systematic reviews meet the following four conditions: (1) include primary trials that do not substantially overlap, (2) match overview scope, (3) are of high methodological quality, and (4) are up-to-date.\n\n\nCONCLUSION\nConsidering the intended function of proposed overviews with the corresponding methodological conditions may improve the quality of this burgeoning publication type. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "4219836dc38e96a142e3b73cdf87e234",
"text": "BACKGROUND\nNIATx200, a quality improvement collaborative, involved 201 substance abuse clinics. Each clinic was randomized to one of four implementation strategies: (a) interest circle calls, (b) learning sessions, (c) coach only or (d) a combination of all three. Each strategy was led by NIATx200 coaches who provided direct coaching or facilitated the interest circle and learning session interventions.\n\n\nMETHODS\nEligibility was limited to NIATx200 coaches (N = 18), and the executive sponsor/change leader of participating clinics (N = 389). Participants were invited to complete a modified Grasha Riechmann Student Learning Style Survey and Teaching Style Inventory. Principal components analysis determined participants' preferred learning and teaching styles.\n\n\nRESULTS\nResponses were received from 17 (94.4 %) of the coaches. Seventy-two individuals were excluded from the initial sample of change leaders and executive sponsors (N = 389). Responses were received from 80 persons (25.2 %) of the contactable individuals. Six learning profiles for the executive sponsors and change leaders were identified: Collaborative/Competitive (N = 28, 36.4 %); Collaborative/Participatory (N = 19, 24.7 %); Collaborative only (N = 17, 22.1 %); Collaborative/Dependent (N = 6, 7.8 %); Independent (N = 3, 5.2 %); and Avoidant/Dependent (N = 3, 3.9 %). NIATx200 coaches relied primarily on one of four coaching profiles: Facilitator (N = 7, 41.2 %), Facilitator/Delegator (N = 6, 35.3 %), Facilitator/Personal Model (N = 3, 17.6 %) and Delegator (N = 1, 5.9 %). Coaches also supported their primary coaching profiles with one of eight different secondary coaching profiles.\n\n\nCONCLUSIONS\nThe study is one of the first to assess teaching and learning styles within a QIC. Results indicate that individual learners (change leaders and executive sponsors) and coaches utilize multiple approaches in the teaching and practice-based learning of quality improvement (QI) processes. Identification teaching profiles could be used to tailor the collaborative structure and content delivery. Efforts to accommodate learning styles would facilitate knowledge acquisition enhancing the effectiveness of a QI collaborative to improve organizational processes and outcomes.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov Identifier: NCT00934141 Registered July 6, 2009. Retrospectively registered.",
"title": ""
},
{
"docid": "de024671f84d853ac3bb7735a4497f1f",
"text": "Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference. However, it is also crucial to understand the extent to which neural networks can perform relational reasoning and combinatorial generalization from natural language—abilities that are often obscured by annotation artifacts and the dominance of language modeling in standard QA benchmarks. In this work, we present a novel benchmark dataset for language understanding that isolates performance on relational reasoning. We also present a neural message-passing baseline and show that this model, which incorporates a relational inductive bias, is superior at combinatorial generalization compared to a traditional recurrent neural network approach.",
"title": ""
},
{
"docid": "6537921976c2779d1e7d921c939ec64d",
"text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.",
"title": ""
},
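To make the memory-bound access pattern discussed above concrete, here is a naive, unblocked 7-point stencil sweep in NumPy. It is only a reference sketch: the paper's 3.5D blocking, threading and SIMD tuning are not implemented here, and the stencil coefficients are arbitrary.

```python
# Naive 7-point stencil sweep over a 3D grid (no blocking, for illustration only).
import numpy as np

def sweep_7pt(grid, steps, c0=0.5, c1=1.0 / 12.0):
    a = grid.copy()
    b = a.copy()          # keeps fixed boundary values between sweeps
    for _ in range(steps):
        b[1:-1, 1:-1, 1:-1] = (
            c0 * a[1:-1, 1:-1, 1:-1]
            + c1 * (a[:-2, 1:-1, 1:-1] + a[2:, 1:-1, 1:-1]
                    + a[1:-1, :-2, 1:-1] + a[1:-1, 2:, 1:-1]
                    + a[1:-1, 1:-1, :-2] + a[1:-1, 1:-1, 2:])
        )
        a, b = b, a       # swap buffers between time steps
    return a

grid = np.random.rand(64, 64, 64).astype(np.float32)
result = sweep_7pt(grid, steps=10)
```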
{
"docid": "d9605c1cde4c40d69c2faaea15eb466c",
"text": "A magnetically tunable ferrite-loaded substrate integrated waveguide (SIW) cavity resonator is presented and demonstrated. X-band cavity resonator is operated in the dominant mode and the ferrite slabs are loaded onto the side walls of the cavity where the value of magnetic field is highest. Measured results for single and double ferrite-loaded SIW cavity resonators are presented. Frequency tuning range of more than 6% and 10% for single and double ferrite slabs are obtained. Unloaded Q -factor of more than 200 is achieved.",
"title": ""
},
{
"docid": "5f4b0c833e7a542eb294fa2d7a305a16",
"text": "Security awareness is an often-overlooked factor in an information security program. While organizations expand their use of advanced security technology and continuously train their security professionals, very little is used to increase the security awareness among the normal users, making them the weakest link in any organization. As a result, today, organized cyber criminals are putting significant efforts to research and develop advanced hacking methods that can be used to steal money and information from the general public. Furthermore, the high internet penetration growth rate in the Middle East and the limited security awareness among users is making it an attractive target for cyber criminals. In this paper, we will show the need for security awareness programs in schools, universities, governments, and private organizations in the Middle East by presenting results of several security awareness studies conducted among students and professionals in UAE in 2010. This includes a comprehensive wireless security survey in which thousands of access points were detected in Dubai and Sharjah most of which are either unprotected or employ weak types of protection. Another study focuses on evaluating the chances of general users to fall victims to phishing attacks which can be used to steal bank and personal information. Furthermore, a study of the user’s awareness of privacy issues when using RFID technology is presented. Finally, we discuss several key factors that are necessary to develop a successful information security awareness program.",
"title": ""
},
{
"docid": "565a6f620f9ccd33b6faa5a7f37df188",
"text": "Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered standing-alone. However, because of their complementary features, we expect that their integration can foster a number of computing and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the standing-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof-of-concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype. Afterward, we compare the obtained performance with the corresponding one of a benchmark technological platform, e.g., the V-D2D one. It exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.",
"title": ""
},
{
"docid": "84ae9f9f1dd10a8910ff99d1dd4ec227",
"text": "With the advent of powerful ranging and visual sensors, nowadays, it is convenient to collect sparse 3-D point clouds and aligned high-resolution images. Benefitted from such convenience, this letter proposes a joint method to perform both depth assisted object-level image segmentation and image guided depth upsampling. To this end, we formulate these two tasks together as a bi-task labeling problem, defined in a Markov random field. An alternating direction method (ADM) is adopted for the joint inference, solving each sub-problem alternatively. More specifically, the sub-problem of image segmentation is solved by Graph Cuts, which attains discrete object labels efficiently. Depth upsampling is addressed via solving a linear system that recovers continuous depth values. By this joint scheme, robust object segmentation results and high-quality dense depth maps are achieved. The proposed method is applied to the challenging KITTI vision benchmark suite, as well as the Leuven dataset for validation. Comparative experiments show that our method outperforms stand-alone approaches.",
"title": ""
},
{
"docid": "0e98010ded0712ab0e2f78af0a476c86",
"text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.",
"title": ""
},
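A hedged sketch of the TF-IDF weighting step mentioned above, applied to audio-word histograms. The counts are invented, and the audio words are assumed to already exist (for example, as random-forest leaf indices); this is not the authors' pipeline, and their TD-IDD variant is not shown.

```python
# TF-IDF weighting of "audio word" counts per track (illustrative data only).
import numpy as np

counts = np.array([[3, 0, 1, 0],      # track 1: counts of 4 audio words
                   [0, 2, 0, 5],      # track 2
                   [1, 1, 0, 0]])     # track 3

tf = counts / counts.sum(axis=1, keepdims=True)   # term frequency per track
df = (counts > 0).sum(axis=0)                     # document frequency per word
idf = np.log(counts.shape[0] / df)                # inverse document frequency
tfidf = tf * idf                                  # weighted audio-word features
print(tfidf.round(3))
```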
{
"docid": "4ed47f48df37717148d985ad927b813f",
"text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.",
"title": ""
},
{
"docid": "97fa48d92c4a1b9d2bab250d5383173c",
"text": "This paper presents a new type of axial flux motor, the yokeless and segmented armature (YASA) topology. The YASA motor has no stator yoke, a high fill factor and short end windings which all increase torque density and efficiency of the machine. Thus, the topology is highly suited for high performance applications. The LIFEcar project is aimed at producing the world's first hydrogen sports car, and the first YASA motors have been developed specifically for the vehicle. The stator segments have been made using powdered iron material which enables the machine to be run up to 300 Hz. The iron in the stator of the YASA motor is dramatically reduced when compared to other axial flux motors, typically by 50%, causing an overall increase in torque density of around 20%. A detailed Finite Element analysis (FEA) analysis of the YASA machine is presented and it is shown that the motor has a peak efficiency of over 95%.",
"title": ""
},
{
"docid": "7c75c77802045cfd8d89c73ca8a68ce6",
"text": "The results of the 2016 Brexit referendum in the U.K. and presidential election in the U.S. surprised pollsters and traditional media alike, and social media is now being blamed in part for creating echo chambers that encouraged the spread of fake news that influenced voters.",
"title": ""
},
{
"docid": "9081cb169f74b90672f84afa526f40b3",
"text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which will allow to decrypt transmitted data in a mode close to real time.",
"title": ""
},
{
"docid": "6f8e441738a0c045a83f0e1efd4e0bbd",
"text": "Irony and humour are just two of many forms of figurative language. Approaches to identify in vast volumes of data such as the internet humorous or ironic statements is important not only from a theoretical view point but also for their potential applicability in social networks or human-computer interactive systems. In this study we investigate the automatic detection of irony and humour in social networks such as Twitter casting it as a classification problem. We propose a rich set of features for text interpretation and representation to train classification procedures. In cross-domain classification experiments our model achieves and improves state-of-the-art",
"title": ""
}
] |
scidocsrr
|
8d5f60dd08e3d1f5fee9bf9912cdc382
|
A deliberate practice account of typing proficiency in everyday typists.
|
[
{
"docid": "420a3d0059a91e78719955b4cc163086",
"text": "The superior skills of experts, such as accomplished musicians and chess masters, can be amazing to most spectators. For example, club-level chess players are often puzzled by the chess moves of grandmasters and world champions. Similarly, many recreational athletes find it inconceivable that most other adults – regardless of the amount or type of training – have the potential ever to reach the performance levels of international competitors. Especially puzzling to philosophers and scientists has been the question of the extent to which expertise requires innate gifts versus specialized acquired skills and abilities. One of the most widely used and simplest methods of gathering data on exceptional performance is to interview the experts themselves. But are experts always capable of describing their thoughts, their behaviors, and their strategies in a manner that would allow less-skilled individuals to understand how the experts do what they do, and perhaps also understand how they might reach expert level through appropriate training? To date, there has been considerable controversy over the extent to which experts are capable of explaining the nature and structure of their exceptional performance. Some pioneering scientists, such as Binet (1893 / 1966), questioned the validity of the experts’ descriptions when they found that some experts gave reports inconsistent with those of other experts. To make matters worse, in those rare cases that allowed verification of the strategy by observing the performance, discrepancies were found between the reported strategies and the observations (Watson, 1913). Some of these discrepancies were explained, in part, by the hypothesis that some processes were not normally mediated by awareness/attention and that the mere act of engaging in self-observation (introspection) during performance changed the content of ongoing thought processes. These problems led most psychologists in first half of the 20th century to reject all types of introspective verbal reports as valid scientific evidence, and they focused almost exclusively on observable behavior (Boring, 1950). In response to the problems with the careful introspective analysis of images and perceptions, investigators such as John B.",
"title": ""
}
] |
[
{
"docid": "d2b45d76e93f07ededbab03deee82431",
"text": "A cordless battery charger will greatly improve the user friendliness of electric vehicles (EVs), accelerating the replacement of traditional internal combustion engine (ICE) vehicles with EVs and improving energy sustainability as a result. Resonant circuits are used for both the power transmitter and receiver of a cordless charger to compensate their coils and improve power transfer efficiency. However, conventional compensation circuit topology is not suitable for application to an EV, which involves very large power, a wide gap between the transmitter and receiver coils, and large horizontal misalignment. This paper proposes a novel compensation circuit topology that has a carefully designed series capacitor added to the parallel resonant circuit of the receiver. The proposed circuit has been implemented and tested on an EV. The simulation and experimental results are presented to show that the circuit can improve the power factor and power transfer efficiency, and as a result, allow a larger gap between the transmitter and receiver coils.",
"title": ""
},
{
"docid": "86ce47260d84ddcf8558a0e5e4f2d76f",
"text": "We present the definition and computational algorithms for a new class of surfaces which are dual to the isosurface produced by the widely used marching cubes (MC) algorithm. These new isosurfaces have the same separating properties as the MC surfaces but they are comprised of quad patches that tend to eliminate the common negative aspect of poorly shaped triangles of the MC isosurfaces. Based upon the concept of this new dual operator, we describe a simple, but rather effective iterative scheme for producing smooth separating surfaces for binary, enumerated volumes which are often produced by segmentation algorithms. Both the dual surface algorithm and the iterative smoothing scheme are easily implemented.",
"title": ""
},
{
"docid": "82be3cafe24185b1f3c58199031e41ef",
"text": "UNLABELLED\nFamily-based therapy (FBT) is regarded as best practice for the treatment of eating disorders in children and adolescents. In FBT, parents play a vital role in bringing their child or adolescent to health; however, a significant minority of families do not respond to this treatment. This paper introduces a new model whereby FBT is enhanced by integrating emotion-focused therapy (EFT) principles and techniques with the aims of helping parents to support their child's refeeding and interruption of symptoms. Parents are also supported to become their child's 'emotion coach'; and to process any emotional 'blocks' that may interfere with their ability to take charge of recovery. A parent testimonial is presented to illustrate the integration of the theory and techniques of EFT in the FBT model. EFFT (Emotion-Focused Family Therapy) is a promising model of therapy for those families who require a more intense treatment to bring about recovery of an eating disorder.\n\n\nKEY PRACTITIONER MESSAGE\nMore intense therapeutic models exist for treatment-resistant eating disorders in children and adolescents. Emotion is a powerful healing tool in families struggling with an eating disorder. Working with parent's emotions and emotional reactions to their child's struggles has the potential to improve child outcomes.",
"title": ""
},
{
"docid": "72226ba8d801a3db776cf40d5243c521",
"text": "Hyperspectral image (HSI) classification is one of the most widely used methods for scene analysis from hyperspectral imagery. In the past, many different engineered features have been proposed for the HSI classification problem. In this paper, however, we propose a feature learning approach for hyperspectral image classification based on convolutional neural networks (CNNs). The proposed CNN model is able to learn structured features, roughly resembling different spectral band-pass filters, directly from the hyperspectral input data. Our experimental results, conducted on a commonly-used remote sensing hyperspectral dataset, show that the proposed method provides classification results that are among the state-of-the-art, without using any prior knowledge or engineered features.",
"title": ""
},
{
"docid": "950fe0124f830a63f528aa5905116c82",
"text": "One of the main barriers to immersivity during object manipulation in virtual reality is the lack of realistic haptic feedback. Our goal is to convey compelling interactions with virtual objects, such as grasping, squeezing, pressing, lifting, and stroking, without requiring a bulky, world-grounded kinesthetic feedback device (traditional haptics) or the use of predetermined passive objects (haptic retargeting). To achieve this, we use a pair of finger-mounted haptic feedback devices that deform the skin on the fingertips to convey cutaneous force information from object manipulation. We show that users can perceive differences in virtual object weight and that they apply increasing grasp forces when lifting virtual objects as rendered mass is increased. Moreover, we show how naive users perceive changes of a virtual object's physical properties when we use skin deformation to render objects with varying mass, friction, and stiffness. These studies demonstrate that fingertip skin deformation devices can provide a compelling haptic experience appropriate for virtual reality scenarios involving object manipulation.",
"title": ""
},
{
"docid": "c0d7cd54a947d9764209e905a6779d45",
"text": "The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users' actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary's knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user's service quality constraints against an adversary implementing the optimal inference algorithm. Such LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user's service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In such setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated in the users' mobile devices they use to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.",
"title": ""
},
{
"docid": "bdbbe079493bbfec7fb3cb577c926997",
"text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. Our experimental results show that the proposed technique outperforms existing techniques substantially.",
"title": ""
},
{
"docid": "6717e438376a78cb177bfc3942b6eec6",
"text": "Decisions are often guided by generalizing from past experiences. Fundamental questions remain regarding the cognitive and neural mechanisms by which generalization takes place. Prior data suggest that generalization may stem from inference-based processes at the time of generalization. By contrast, generalization may emerge from mnemonic processes occurring while premise events are encoded. Here, participants engaged in a two-phase learning and generalization task, wherein they learned a series of overlapping associations and subsequently generalized what they learned to novel stimulus combinations. Functional MRI revealed that successful generalization was associated with coupled changes in learning-phase activity in the hippocampus and midbrain (ventral tegmental area/substantia nigra). These findings provide evidence for generalization based on integrative encoding, whereby overlapping past events are integrated into a linked mnemonic representation. Hippocampal-midbrain interactions support the dynamic integration of experiences, providing a powerful mechanism for building a rich associative history that extends beyond individual events.",
"title": ""
},
{
"docid": "b0727e320a1c532bd3ede4fd892d8d01",
"text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.",
"title": ""
},
{
"docid": "932c66caf9665e9dea186732217d4313",
"text": "Citations are very important parameters and are used to take many important decisions like ranking of researchers, institutions, countries, and to measure the relationship between research papers. All of these require accurate counting of citations and their occurrence (in-text citation counts) within the citing papers. Citation anchors refer to the citation made within the full text of the citing paper for example: ‘[1]’, ‘(Afzal et al, 2015)’, ‘[Afzal, 2015]’ etc. Identification of citation-anchors from the plain-text is a very challenging task due to the various styles and formats of citations. Recently, Shahid et al. highlighted some of the problems such as commonality in content, wrong allotment, mathematical ambiguities, and string variations etc in automatically identifying the in-text citation frequencies. The paper proposes an algorithm, CAD, for identification of citation-anchors and its in-text citation frequency based on different rules. For a comprehensive analysis, the dataset of research papers is prepared: on both Journal of Universal Computer Science (J.UCS) and (2) CiteSeer digital libraries. In experimental study, we conducted two experiments. In the first experiment, the proposed approach is compared with state-of-the-art technique over both datasets. The J.UCS dataset consists of 1200 research papers with 16,000 citation strings or references while the CiteSeer dataset consists of 52 research papers with 1850 references. The total dataset size becomes 1252 citing documents and 17,850 references. The experiments showed that CAD algorithm improved F-score by 44% and 37% respectively on both J.UCS and CiteSeer dataset over the contemporary technique (Shahid et al. in Int J Arab Inf Technol 12:481–488, 2014). The average score is 41% on both datasets. In the second experiment, the proposed approach is further analyzed against the existing state-of-the-art tools: CERMINE and GROBID. According to our results, the proposed approach is best performing with F1 of 0.99, followed by GROBID (F1 0.89) and CERMINE (F1 0.82).",
"title": ""
},
{
"docid": "f2d1f05292ddb0df8fa92fe1992852ab",
"text": "In this paper, we study the design of omnidirectional mobile robots with Active-Caster RObotic drive with BAll Transmission (ACROBAT). ACROBAT system has been developed by the authors group which realizes mechanical coordination of wheel and steering motions for creating caster behaviors without computer calculations. A motion in the specific direction relative to a robot body is fully depends on the motion of a specific motor. This feature gives a robot designer to build an omnidirectional mobile robot propelled by active-casters with no redundant actuation with a simple control. A controller of the robot becomes as simple as that for omni-wheeled robotic bases. Namely 3DOF of the omnidirectional robot is controlled by three motors using a simple and constant kinematics. ACROBAT includes a unique dual-ball transmission to transmit traction power to rotate and orient a drive wheel with distributing velocity components to wheel and steering axes in an appropriate ratio. Therefore a sensor for measuring a wheel orientation and calculations for velocity distributions are totally removed from a conventional control system. To build an omnidirectional vehicle by ACROBAT, the significant feature is some multiple drive shafts can be driven by a common motor which realizes non-redundant actuation of the robotic platform. A kinematic model of the proposed robot with ACROBAT is analyzed and a mechanical condition for realizing a non-redundant actuation is derived. Based on the kinematic model and the mechanical condition, computer simulations of the mechanism are performed. A prototype two-wheeled robot with two ACROBATs is designed and built to verify the availability of the proposed system. In the experiments, the prototype robot shows successful omnidirectional motions with a simple and constant kinematics based control.",
"title": ""
},
{
"docid": "4d0b04f546ab5c0d79bb066b1431ff51",
"text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b1fbc91a501ea25c7d3d20780a2be74",
"text": "STUDY DESIGN\nA systematic quantitative review of the literature.\n\n\nOBJECTIVE\nTo compare combined anterior-posterior surgery versus posterior surgery for thoracolumbar fractures in order to identify better treatments.\n\n\nSUMMARY OF BACKGROUND DATA\nAxial load of the anterior and middle column of the spine can lead to a burst fracture in the vertebral body. The management of thoracolumbar burst fractures remains controversial. The goals of operative treatment are fracture reduction, fixation and decompressing the neural canal. For this, different operative methods are developed, for instance, the posterior and the combined anterior-posterior approach. Recent systematic qualitative reviews comparing these methods are lacking.\n\n\nMETHODS\nWe conducted an electronic search of MEDLINE, EMBASE, LILACS and the Cochrane Central Register for Controlled Trials.\n\n\nRESULTS\nFive observational comparative studies and no randomized clinical trials comparing the combined anteriorposterior approach with the posterior approach were retrieved. The total enrollment of patients in these studies was 755 patients. The results were expressed as relative risk (RR) for dichotomous outcomes and weighted mean difference (WMD) for continuous outcomes with 95% confidence intervals (CI).\n\n\nCONCLUSIONS\nA small significantly higher kyphotic correction and improvement of vertebral height (sagittal index) observed for the combined anterior-posterior group is cancelled out by more blood loss, longer operation time, longer hospital stay, higher costs and a possible higher intra- and postoperative complication rate requiring re-operation and the possibility of a worsened Hannover spine score. The surgeons' choices regarding the operative approach are biased: worse cases tended to undergo the combined anterior-posterior approach.",
"title": ""
},
{
"docid": "50795998e83dafe3431c3509b9b31235",
"text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.",
"title": ""
},
{
"docid": "2bd5ca4cbb8ef7eea1f7b2762918d18b",
"text": "Deep convolutional neural networks continue to advance the state-of-the-art in many domains as they grow bigger and more complex. It has been observed that many of the parameters of a large network are redundant, allowing for the possibility of learning a smaller network that mimics the outputs of the large network through a process called Knowledge Distillation. We show, however, that standard Knowledge Distillation is not effective for learning small models for the task of pedestrian detection. To improve this process, we introduce a higher-dimensional hint layer to increase information flow. We also estimate the uncertainty in the outputs of the large network and propose a loss function to incorporate this uncertainty. Finally, we attempt to boost the complexity of the small network without increasing its size by using as input hand-designed features that have been demonstrated to be effective for pedestrian detection. For only a 2.8% increase in miss rate, we have succeeded in training a student network that is 8 times faster and 21 times smaller than the teacher network.",
"title": ""
},
{
"docid": "ec130c42c43a2a0ba8f33cd4a5d0082b",
"text": "Support vector machine (SVM) has appeared as a powerful tool for forecasting forex market and demonstrated better performance over other methods, e.g., neural network or ARIMA based model. SVM-based forecasting model necessitates the selection of appropriate kernel function and values of free parameters: regularization parameter and ε– insensitive loss function. In this paper, we investigate the effect of different kernel functions, namely, linear, polynomial, radial basis and spline on prediction error measured by several widely used performance metrics. The effect of regularization parameter is also studied. The prediction of six different foreign currency exchange rates against Australian dollar has been performed and analyzed. Some interesting results are presented.",
"title": ""
},
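For illustration only (not the paper's data or settings), the sketch below compares SVR kernels on a synthetic exchange-rate series with scikit-learn. Note that scikit-learn offers no spline kernel, so only linear, polynomial and RBF are shown, and the C/epsilon values and lag length are placeholder assumptions.

```python
# Hedged sketch: one-step-ahead rate prediction with SVR under different kernels.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
rate = np.cumsum(rng.normal(0, 0.01, 500)) + 1.0     # synthetic exchange rate

lag = 5                                              # last 5 rates as features
X = np.array([rate[i - lag:i] for i in range(lag, len(rate))])
y = rate[lag:]
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

for kernel in ("linear", "poly", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.001)
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{kernel:6s} MSE = {mse:.6f}")
```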
{
"docid": "207bb3922ad45daa1023b70e1a18baf7",
"text": "The article explains how photo-response nonuniformity (PRNU) of imaging sensors can be used for a variety of important digital forensic tasks, such as device identification, device linking, recovery of processing history, and detection of digital forgeries. The PRNU is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently, every sensor casts a weak noise-like pattern onto every image it takes. This pattern, which plays the role of a sensor fingerprint, is essentially an unintentional stochastic spread-spectrum watermark that survives processing, such as lossy compression or filtering. This tutorial explains how this fingerprint can be estimated from images taken by the camera and later detected in a given image to establish image origin and integrity. Various forensic tasks are formulated as a two-channel hypothesis testing problem approached using the generalized likelihood ratio test. The performance of the introduced forensic methods is briefly illustrated on examples to give the reader a sense of the performance.",
"title": ""
},
{
"docid": "c5d74c69c443360d395a8371055ef3e2",
"text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.",
"title": ""
},
{
"docid": "b5dc5268c2eb3b216aa499a639ddfbf9",
"text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter",
"title": ""
},
{
"docid": "e37f707ac7a86f287fbbfe9b8a4b1e31",
"text": "We survey distributed deep learning models for training or inference without accessing raw data from clients. These methods aim to protect confidential patterns in data while still allowing servers to train models. The distributed deep learning methods of federated learning, split learning and large batch stochastic gradient descent are compared in addition to private and secure approaches of differential privacy, homomorphic encryption, oblivious transfer and garbled circuits in the context of neural networks. We study their benefits, limitations and trade-offs with regards to computational resources, data leakage and communication efficiency and also share our anticipated future trends.",
"title": ""
}
] |
scidocsrr
|
c2998fed4e899382b5d39ff452daddc4
|
REINFORCED CONCRETE WALL RESPONSE UNDER UNI-AND BI-DIRECTIONAL LOADING
|
[
{
"docid": "7a06c1b73662a377875da0ea2526c610",
"text": "a Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 515, Station 18, CH – 1015 Lausanne, Switzerland b Earthquake Engineering and Structural Dynamics Laboratory (EESD), School of Architecture, Civil and Environmental Engineering (ENAC), École Polytechnique Fédérale de Lausanne (EPFL), EPFL ENAC IIC EESD, GC B2 504, Station 18, CH – 1015 Lausanne, Switzerland",
"title": ""
}
] |
[
{
"docid": "4b7e71b412770cbfe059646159ec66ca",
"text": "We present empirical evidence to demonstrate that there is little or no difference between the Java Virtual Machine and the .NET Common Language Runtime, as regards the compilation and execution of object-oriented programs. Then we give details of a case study that proves the superiority of the Common Language Runtime as a target for imperative programming language compilers (in particular GCC).",
"title": ""
},
{
"docid": "76f9b2059a99eb9cc1ed2d9dc5686724",
"text": "This paper surveys the results of various studies on 3-D image coding. Themes are focused on efficient compression and display-independent representation of 3-D images. Most of the works on 3-D image coding have been concentrated on the compression methods tuned for each of the 3-D image formats (stereo pairs, multi-view images, volumetric images, holograms and so on). For the compression of stereo images, several techniques concerned with the concept of disparity compensation have been developed. For the compression of multi-view images, the concepts of disparity compensation and epipolar plane image (EPI) are the efficient ways of exploiting redundancies between multiple views. These techniques, however, heavily depend on the limited camera configurations. In order to consider many other multi-view configurations and other types of 3-D images comprehensively, more general platform for the 3-D image representation is introduced, aiming to outgrow the framework of 3-D “image” communication and to open up a novel field of technology, which should be called the “spatial” communication. Especially, the light ray based method has a wide range of application, including efficient transmission of the physical world, as well as integration of the virtual and physical worlds. key words: 3-D image coding, stereo images, multi-view images, panoramic images, volumetric images, holograms, displayindependent representation, light rays, spatial communication",
"title": ""
},
{
"docid": "9490f117f153a16152237a5a6b08c0a3",
"text": "Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy.",
"title": ""
},
{
"docid": "fc62b094df3093528c6846e405f55e39",
"text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.",
"title": ""
},
{
"docid": "c7405ff209148bcba4283e57c91f63f9",
"text": "Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walkmovement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparable to, the original algorithm when considering the quality of the solution obtained. However, these schemes cannot still achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three new proposed search schemes including “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1” with three control parameters using a random method to generate the offspring. Experiment results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.",
"title": ""
},
{
"docid": "0cf9ef0e5e406509f35c0dcd7ea598af",
"text": "This paper proposes a method to reduce cogging torque of a single side Axial Flux Permanent Magnet (AFPM) motor according to analysis results of finite element analysis (FEA) method. First, the main cause of generated cogging torque will be studied using three dimensional FEA method. In order to reduce the cogging torque, a dual layer magnet step skewed (DLMSS) method is proposed to determine the shape of dual layer magnets. The skewed angle of magnetic poles between these two layers is determined using equal air gap flux of inner and outer layers. Finally, a single-sided AFPM motor based on the proposed methods is built as experimental platform to verify the effectiveness of the design. Meanwhile, the differences between design and tested results will be analyzed for future research and improvement.",
"title": ""
},
{
"docid": "4016ad494a953023f982b8a4876bc8c1",
"text": "Visual tracking is one of the most important field of computer vision. It has immense number of applications ranging from surveillance to hi-fi military applications. This paper is based on the application developed for automatic visual tracking and fire control system for anti-aircraft machine gun (AAMG). Our system mainly consists of camera, as visual sensor; mounted on a 2D-moving platform attached with 2GHz embedded system through RS-232 and AAMG mounted on the same moving platform. Camera and AAMG are both bore-sighted. Correlation based template matching algorithm has been used for automatic visual tracking. This is the algorithm used in civilian and military automatic target recognition, surveillance and tracking systems. The algorithm does not give robust performance in different environments, especially in clutter and obscured background, during tracking. So, motion and prediction algorithms have been integrated with it to achieve robustness and better performance for real-time tracking. Visual tracking is also used to calculate lead angle, which is a vital component of such fire control systems. Lead is angular correction needed to compensate for the target motion during the time of flight of the projectile, to accurately hit the target. Although at present lead computation is not robust due to some limitation as lead calculation mostly relies on gunner intuition. Even then by the integrated implementation of lead angle with visual tracking and control algorithm for moving platform, we have been able to develop a system which detects tracks and destroys the target of interest.",
"title": ""
},
{
"docid": "12f717b4973a5290233d6f03ba05626b",
"text": "We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data.",
"title": ""
},
{
"docid": "002f49b0aa994b286a106d6b75ec8b2a",
"text": "We introduce a library of geometric voxel features for CAD surface recognition/retrieval tasks. Our features include local versions of the intrinsic volumes (the usual 3D volume, surface area, integrated mean and Gaussian curvature) and a few closely related quantities. We also compute Haar wavelet and statistical distribution features by aggregating raw voxel features. We apply our features to object classification on the ESB data set and demonstrate accurate results with a small number of shallow decision trees.",
"title": ""
},
{
"docid": "8cddb1fed30976de82d62de5066a5ce6",
"text": "Today, more and more people have their virtual identities on the web. It is common that people are users of more than one social network and also their friends may be registered on multiple websites. A facility to aggregate our online friends into a single integrated environment would enable the user to keep up-to-date with their virtual contacts more easily, as well as to provide improved facility to search for people across different websites. In this paper, we propose a method to identify users based on profile matching. We use data from two popular social networks to study the similarity of profile definition. We evaluate the importance of fields in the web profile and develop a profile comparison tool. We demonstrate the effectiveness and efficiency of our tool in identifying and consolidating duplicated users on different websites.",
"title": ""
},
{
"docid": "482bc3d151948bad9fbfa02519fbe61a",
"text": "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and use them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches having even access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a-priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring to explicitly model the underlying physical laws.",
"title": ""
},
{
"docid": "dfb83ad16854797137e34a5c7cb110ae",
"text": "The increasing computing requirements for GPUs (Graphics Processing Units) have favoured the design and marketing of commodity devices that nowadays can also be used to accelerate general purpose computing. Therefore, future high performance clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature a considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "12b115e3b759fcb87956680d6e89d7aa",
"text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "e112af9e35690b64acc7242611b39dd2",
"text": "Body sensor network systems can help people by providing healthcare services such as medical monitoring, memory enhancement, medical data access, and communication with the healthcare provider in emergency situations through the SMS or GPRS [1,2]. Continuous health monitoring with wearable [3] or clothing-embedded transducers [4] and implantable body sensor networks [5] will increase detection of emergency conditions in at risk patients. Not only the patient, but also their families will benefit from these. Also, these systems provide useful methods to remotely acquire and monitor the physiological signals without the need of interruption of the patient’s normal life, thus improving life quality [6,7].",
"title": ""
},
{
"docid": "9121462cf9ac2b2c55b7a1c96261472f",
"text": "The main goal of this chapter is to give characteristics, evaluation methodologies, and research examples of collaborative augmented reality (AR) systems from a perspective of human-to-human communication. The chapter introduces classifications of conventional and 3D collaborative systems as well as typical characteristics and application examples of collaborative AR systems. Next, it discusses design considerations of collaborative AR systems from a perspective of human communication and then discusses evaluation methodologies of human communication behaviors. The next section discusses a variety of collaborative AR systems with regard to display devices used. Finally, the chapter gives conclusion with future directions. This will be a good starting point to learn existing collaborative AR systems, their advantages and limitations. This chapter will also contribute to the selection of appropriate hardware configurations and software designs of a collaborative AR system for given conditions.",
"title": ""
},
{
"docid": "5cd8ee9a938ed087e2a3bc667991557d",
"text": "Expense reimbursement is a time-consuming and labor-intensive process across organizations. In this paper, we present a prototype expense reimbursement system that dramatically reduces the elapsed time and costs involved, by eliminating paper from the process life cycle. Our complete solution involves (1) an electronic submission infrastructure that provides multi- channel image capture, secure transport and centralized storage of paper documents; (2) an unconstrained data mining approach to extracting relevant named entities from un-structured document images; (3) automation of auditing procedures that enables automatic expense validation with minimum human interaction.\n Extracting relevant named entities robustly from document images with unconstrained layouts and diverse formatting is a fundamental technical challenge to image-based data mining, question answering, and other information retrieval tasks. In many applications that require such capability, applying traditional language modeling techniques to the stream of OCR text does not give satisfactory result due to the absence of linguistic context. We present an approach for extracting relevant named entities from document images by combining rich page layout features in the image space with language content in the OCR text using a discriminative conditional random field (CRF) framework. We integrate this named entity extraction engine into our expense reimbursement solution and evaluate the system performance on large collections of real-world receipt images provided by IBM World Wide Reimbursement Center.",
"title": ""
},
{
"docid": "4775bf71a5eea05b77cafa53daefcff9",
"text": "There is mounting empirical evidence that interacting with nature delivers measurable benefits to people. Reviews of this topic have generally focused on a specific type of benefit, been limited to a single discipline, or covered the benefits delivered from a particular type of interaction. Here we construct novel typologies of the settings, interactions and potential benefits of people-nature experiences, and use these to organise an assessment of the benefits of interacting with nature. We discover that evidence for the benefits of interacting with nature is geographically biased towards high latitudes and Western societies, potentially contributing to a focus on certain types of settings and benefits. Social scientists have been the most active researchers in this field. Contributions from ecologists are few in number, perhaps hindering the identification of key ecological features of the natural environment that deliver human benefits. Although many types of benefits have been studied, benefits to physical health, cognitive performance and psychological well-being have received much more attention than the social or spiritual benefits of interacting with nature, despite the potential for important consequences arising from the latter. The evidence for most benefits is correlational, and although there are several experimental studies, little as yet is known about the mechanisms that are important for delivering these benefits. For example, we do not know which characteristics of natural settings (e.g., biodiversity, level of disturbance, proximity, accessibility) are most important for triggering a beneficial interaction, and how these characteristics vary in importance among cultures, geographic regions and socio-economic groups. These are key directions for future research if we are to design landscapes that promote high quality interactions between people and nature in a rapidly urbanising world.",
"title": ""
},
{
"docid": "d1eed1d7875930865944c98fbab5f7e1",
"text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.",
"title": ""
}
] |
scidocsrr
|
5e61482d464967af202c8ba51112ae7d
|
An Image Steganography Scheme Using 3D-Sudoku
|
[
{
"docid": "8c8a100e4dc69e1e68c2bd55f010656d",
"text": "In this paper, a data hiding scheme by simple LSB substitution is proposed. By applying an optimal pixel adjustment process to the stego-image obtained by the simple LSB substitution method, the image quality of the stego-image can be greatly improved with low extra computational complexity. The worst case mean-square-error between the stego-image and the cover-image is derived. Experimental results show that the stego-image is visually indistinguishable from the original cover-image. The obtained results also show a signi7cant improvement with respect to a previous work. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "3d9c02413c80913cb32b5094dcf61843",
"text": "There is an explosion of youth subscriptions to original content-media-sharing Web sites such as YouTube. These Web sites combine media production and distribution with social networking features, making them an ideal place to create, connect, collaborate, and circulate. By encouraging youth to become media creators and social networkers, new media platforms such as YouTube offer a participatory culture in which youth can develop, interact, and learn. As youth development researchers, we must be cognizant of this context and critically examine what this platform offers that might be unique to (or redundant of) typical adolescent experiences in other developmental contexts.",
"title": ""
},
{
"docid": "c9faca5c8c5a0e7e7630e1e445c186a3",
"text": "We report the results of a study on students’ interest in physics at the end of their compulsory schooling in Israel carried out in the framework of the ROSE Project. Factors studied were their opinions about science classes, their out-of-school experiences in physics, and their attitudes toward science and technology. Students’ overall interest in physics was “neutral” (neither positive nor negative), with boys showing a higher interest than girls. We found a strong correlation between students’ “neutral” interest in physics and their negative opinions about science classes. These findings raise serious questions about the implementation of changes made in the Israeli science curriculum in primary and junior high school, especially if the goal is to prepare the young generation for life in a scientific-technological era. A more in-depth analysis of the results led us to formulate curricular, behavioral, and organizational changes needed to reach this goal.",
"title": ""
},
{
"docid": "1bf69a2bffe2652e11ff8ec7f61b7c0d",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "25f2763cd7c71cadf8f86f042841cd48",
"text": "This study investigated to design the trajectory tracking controller for wheeled inverted pendulum robot using tilt angle control. It introduced as follows; 3DOF model of wheeled inverted pendulum robot was derived by Lagrangian multiplier method. The trajectory tracking algorithm was redesigned in order to track the trajectory by forward and backward motions. The update algorithm is used to move the way point, which is desired position for servo controller. The tilt angle control was designed to control the tilt angle and the robot's motion. Initial condition and simulation result, which were described the initial condition for simulating the motion and the result of the trajectory tracking using the tilt angle control. The result exhibited that the robot can performs to track the trajectory well. However, some errors might be occurred when the robot performs the steering motion but the robot has a function able to perform to the goal successfully.",
"title": ""
},
{
"docid": "5ce8a143ccb977917df41b93de16aa40",
"text": "The graduated optimization approach, also known as the continuation method, is a popular heuristic to solving non-convex problems that has received renewed interest over the last decade. Despite being popular, very little is known in terms of its theoretical convergence analysis. In this paper we describe a new first-order algorithm based on graduated optimization and analyze its performance. We characterize a family of non-convex functions for which this algorithm provably converges to a global optimum. In particular, we prove that the algorithm converges to an ε-approximate solution within O(1/ε) gradient-based steps. We extend our algorithm and analysis to the setting of stochastic non-convex optimization with noisy gradient feedback, attaining the same convergence rate. Additionally, we discuss the setting of “zeroorder optimization”, and devise a variant of our algorithm which converges at rate of O(d/ε).",
"title": ""
},
{
"docid": "11d3dc9169c914bfdff66d1d9afddfaf",
"text": "As most modern cryptographic Radio Frequency Identification (RFID) devices are based on ciphers that are secure from a purely theoretical point of view, e.g., (Triple-)DES or AES, adversaries have been adopting new methods to extract secret information and cryptographic keys from contactless smartcards: Side-Channel Analysis (SCA) targets the physical implementation of a cipher and allows to recover secret keys by exploiting a side-channel, for instance, the electro-magnetic (EM) emanation of an Integrated Circuit (IC). In this paper we present an analog demodulator specifically designed for refining the SCA of contactless smartcards. The customized analogue hardware increases the quality of EM measurements, facilitates the processing of the side-channel leakage and can serve as a plug-in component to enhance any existing SCA laboratory. Employing it to obtain power profiles of several real-world cryptographic RFIDs, we demonstrate the effectiveness of our measurement setup and evaluate the improvement of our new analog technique compared to previously proposed approaches. Using the example of the popular Mifare DESFire MF3ICD40 contactless smartcard, we show that commercial RFID devices are susceptible to the proposed SCA methods. The security analyses presented in this paper do not require expensive equipment and demonstrate that SCA poses a severe threat to many real-world systems. This novel attack vector has to be taken into account when employing contactless smartcards in security-sensitive applications, e.g., for wireless payment or identification.",
"title": ""
},
{
"docid": "4371812da8ccd01afcb1d91ce58d930e",
"text": "BACKGROUND AND PURPOSE\nCauda equina syndrome (CES) is a severe complication of lumbar spinal disorders; it results from compression of the nerve roots of the cauda equina. The purpose of this study was to evaluate the clinical usefulness of a classification scheme of CES based on factors including clinical symptoms, imaging signs, and electrophysiological findings.\n\n\nMETHODS\nThe records of 39 patients with CES were divided into 4 groups based on clinical features as follows. Group 1 (preclinical): low back pain with only bulbocavernosus reflex and ischiocavernosus reflex abnormalities. Group 2 (early): saddle sensory disturbance and bilateral sciatica. Group 3 (middle): saddle sensory disturbance, bowel or bladder dysfunction, motor weakness of the lower extremity, and reduced sexual function. Group 4 (late): absence of saddle sensation and sexual function in addition to uncontrolled bowel function. The outcome including radiographic and electrophysiological findings was compared between groups.\n\n\nRESULTS\nThe main clinical manifestations of CES included bilateral saddle sensory disturbance, and bowel, bladder, and sexual dysfunction. The clinical symptoms of patients with multiple-segment canal stenosis identified radiographically were more severe than those of patients with single-segment stenosis. BCR and ICR improved in groups 1 and 2 after surgery, but no change was noted for groups 3 and 4.\n\n\nINTERPRETATION\nWe conclude that bilateral radiculopathy or sciatica are early stages of CES and indicate a high risk of development of advanced CES. Electrophysiological abnormalities and reduced saddle sensation are indices of early diagnosis. Patients at the preclinical and early stages have better functional recovery than patients in later stages after surgical decompression.",
"title": ""
},
{
"docid": "3dc4384744f2f85983bc58b0a8a241c6",
"text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. Individualized radiographs for every patient are still recommended.",
"title": ""
},
{
"docid": "ee6fd377c464ac76562f8e7b82b9d2c9",
"text": "Time series (particularly multivariate) classification has drawn a lot of attention in the literature because of its broad applications for different domains, such as health informatics and bioinformatics. Thus, many algorithms have been developed for this task. Among them, nearest neighbor classification (particularly 1-NN) combined with Dynamic Time Warping (DTW) achieves the state of the art performance. However, when data set grows larger, the time consumption of 1-NN with DTW grows linearly. Compared to 1-NN with DTW, the traditional feature-based classification methods are usually more efficient but less effective since their performance is usually dependent on the quality of hand-crafted features. To that end, in this paper, we explore the feature learning techniques to improve the performance of traditional feature-based approaches. Specifically, we propose a novel deep learning framework for multivariate time series classification. We conduct two groups of experiments on real-world data sets from different application domains. The final results show that our model is not only more efficient than the state of the art but also competitive in accuracy. It also demonstrates that feature learning is worth to investigate for time series classification.",
"title": ""
},
{
"docid": "b622e8a511698116be2b2831e8ea7989",
"text": "BACKGROUND\nThe large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant studies in an unbiased way for inclusion in systematic reviews both complex and time consuming. Text mining has been offered as a potential solution: through automating some of the screening process, reviewer time can be saved. The evidence base around the use of text mining for screening has not yet been pulled together systematically; this systematic review fills that research gap. Focusing mainly on non-technical issues, the review aims to increase awareness of the potential of these technologies and promote further collaborative research between the computer science and systematic review communities.\n\n\nMETHODS\nFive research questions led our review: what is the state of the evidence base; how has workload reduction been evaluated; what are the purposes of semi-automation and how effective are they; how have key contextual problems of applying text mining to the systematic review field been addressed; and what challenges to implementation have emerged? We answered these questions using standard systematic review methods: systematic and exhaustive searching, quality-assured data extraction and a narrative synthesis to synthesise findings.\n\n\nRESULTS\nThe evidence base is active and diverse; there is almost no replication between studies or collaboration between research teams and, whilst it is difficult to establish any overall conclusions about best approaches, it is clear that efficiencies and reductions in workload are potentially achievable. On the whole, most suggested that a saving in workload of between 30% and 70% might be possible, though sometimes the saving in workload is accompanied by the loss of 5% of relevant studies (i.e. a 95% recall).\n\n\nCONCLUSIONS\nUsing text mining to prioritise the order in which items are screened should be considered safe and ready for use in 'live' reviews. The use of text mining as a 'second screener' may also be used cautiously. The use of text mining to eliminate studies automatically should be considered promising, but not yet fully proven. In highly technical/clinical areas, it may be used with a high degree of confidence; but more developmental and evaluative work is needed in other disciplines.",
"title": ""
},
{
"docid": "867186860cb323109441cce3d294b905",
"text": "This application note gives guidance on the design of electronic circuits for use with SGX Sensortech electrochemical gas sensors. The information is provided for general advice and care should be taken to adapt the circuits to the particular requirements of the application. By following the recommendations of this application note the user should be able to achieve excellent performance with SGX Sensortech electrochemical gas sensors.",
"title": ""
},
{
"docid": "5a95dd5f369800c6cc4e8651cf63b5cd",
"text": "AIMS AND OBJECTIVES\nTo examine the relationships between social support, maternal parental self-efficacy and postnatal depression in first-time mothers at 6 weeks post delivery.\n\n\nBACKGROUND\nSocial support conceptualised and measured in different ways has been found to positively influence the mothering experience as has maternal parental self-efficacy. No research exists which has measured the relationships between social support, underpinned by social exchange theory and maternal parental self-efficacy using a domain-specific instrument, underpinned by self-efficacy theory and postnatal depression, with first-time mothers at 6 weeks post delivery.\n\n\nDESIGN\nA quantitative correlational descriptive design was used.\n\n\nMETHOD\nData were collected using a five-part questionnaire package containing a researcher developed social support questionnaire, the Perceived Maternal Parental Self-Efficacy Scale and the Edinburgh Postnatal Depression Scale. Four hundred and ten mothers completed questionnaires at 6 weeks post delivery.\n\n\nRESULTS\nSignificant relationships were found between functional social support and postnatal depression; informal social support and postnatal depression; maternal parental self-efficacy and postnatal depression and informal social support and maternal parental self-efficacy at 6 weeks post delivery.\n\n\nCONCLUSION\nNurses and midwives need to be aware of and acknowledge the significant contribution of social support, particularly from family and friends in positively influencing first-time mothers' mental health and well-being in the postpartum period. The development of health care policy and clinical guidelines needs to define and operationalise social support to enhance maternal parental self-efficacy.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThese findings suggest that nurses and midwives need to be cognisant of the importance of social support for first-time mothers in both enhancing maternal parental self-efficacy and reducing postnatal depressive symptomatology in the early postpartum period.",
"title": ""
},
{
"docid": "4102b836c1fefd0f8686d26f12f8e0ca",
"text": "Operative management of unstable burst vertebral fractures is challenging and debatable. This study of such cases was conducted at the Aga Khan Hospital, Karachi from January 1998 to April 2003. All surgically managed spine injuries were reviewed from case notes and operative records. Clinical outcome was assessed by Hanover spine score and correction of kyphosis was measured for radiological assessment. The results were analyzed by Wilcoxon sign rank test for two related samples and p-value < 0.05 was considered significant. Ten patients were identified by inclusion criteria. There was statistically significant difference between mean pre-and postoperative Hanover spine score (p=0.008). Likewise, there was significant difference between mean immediate postoperative and final follow-up kyphosis. (p=0.006). Critical assessment of neurologic and structural extent of injury, proper pre-operative planning and surgical expertise can optimize the outcome of patients.",
"title": ""
},
{
"docid": "645a1d50394e9cf746e88398ca867ad2",
"text": "In this paper, we conduct a comparative analysis of two associative memory-based pattern recognition algorithms. We compare the established Hopfield network algorithm with our novel Distributed Hierarchical Graph Neuron (DHGN) algorithm. The computational complexity and recall efficiency aspects of these algorithms are discussed. The results show that DHGN offers lower computational complexity with better recall efficiency compared to the Hopfield network.",
"title": ""
},
{
"docid": "3388d2e88fdc2db9967da4ddb452d9f1",
"text": "Entity pair provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pair has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanism significantly promotes the development of deep learning in NLP. Inspired by this, for specific instance(entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. Experimental results on SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.",
"title": ""
},
{
"docid": "d2a0ff28b7163203a03be27977b9b425",
"text": "The various types of shadows are characterized. Most existing shadow algorithms are described, and their complexities, advantages, and shortcomings are discussed. Hard shadows, soft shadows, shadows of transparent objects, and shadows for complex modeling primitives are considered. For each type, shadow algorithms within various rendering techniques are examined. The aim is to provide readers with enough background and insight on the various methods to allow them to choose the algorithm best suited to their needs and to help identify the areas that need more research and point to possible solutions.<<ETX>>",
"title": ""
},
{
"docid": "a5f32f0914578abc477fc6cb3be75486",
"text": "This paper describes a state-of-the-art supervised, knowledge-intensive approach to the automatic identification of semantic relations between nominals in English sentences. The system employs a combination of rich and varied sets of new and previously used lexical, syntactic, and semantic features extracted from various knowledge sources such as WordNet and additional annotated corpora. The system ranked first at the third most popular SemEval 2007 Task – Classification of Semantic Relations between Nominals and achieved an F-measure of 72.4% and an accuracy of 76.3%. We also show that some semantic relations are better suited for WordNet-based models than other relations. Additionally, we make a distinction between out-of-context (regular) examples and those that require sentence context for relation identification and show that contextual data are important for the performance of a noun–noun semantic parser. Finally, learning curves show that the task difficulty varies across relations and that our learned WordNet-based representation is highly accurate so the performance results suggest the upper bound on what this representation can do. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0ca90e8375f777a686a6c9f1a1ed40a3",
"text": "The clustered regularly interspaced short [corrected] palindromic repeats (CRISPR)/CRISPR-associated (Cas) 9 nuclease system has provided a powerful tool for genome engineering. Double strand breaks may trigger nonhomologous end joining repair, leading to frameshift mutations, or homology-directed repair using an extrachromosomal template. Alternatively, genomic deletions may be produced by a pair of double strand breaks. The efficiency of CRISPR/Cas9-mediated genomic deletions has not been systematically explored. Here, we present a methodology for the production of deletions in mammalian cells, ranging from 1.3 kb to greater than 1 Mb. We observed a high frequency of intended genomic deletions. Nondeleted alleles are nonetheless often edited with inversions or small insertion/deletions produced at CRISPR recognition sites. Deleted alleles also typically include small insertion/deletions at predicted deletion junctions. We retrieved cells with biallelic deletion at a frequency exceeding that of probabilistic expectation. We demonstrate an inverse relationship between deletion frequency and deletion size. This work suggests that CRISPR/Cas9 is a robust system to produce a spectrum of genomic deletions to allow investigation of genes and genetic elements.",
"title": ""
},
{
"docid": "3d2e170b4cd31d0e1a28c968f0b75cf6",
"text": "Fog Computing is a new variety of the cloud computing paradigm that brings virtualized cloud services to the edge of the network to control the devices in the IoT. We present a pattern for fog computing which describes its architecture, including its computing, storage and networking services. Fog computing is implemented as an intermediate platform between end devices and cloud computing data centers. The recent popularity of the Internet of Things (IoT) has made fog computing a necessity to handle a variety of devices. It has been recognized as an important platform to provide efficient, location aware, close to the edge, cloud services. Our model includes most of the functionality found in current fog architectures.",
"title": ""
},
{
"docid": "3f6f245213590f940a994a96fc1a7291",
"text": "Google offers several speech features on the Android mobile operating system: search by voice, voice input to any text field, and an API for application developers. As a result, our speech recognition service must support a wide range of usage scenarios and speaking styles: relatively short search queries, addresses, business names, dictated SMS and e-mail messages, and a long tail of spoken input to any of the applications users may install. We present a method of on-demand language model interpolation in which contextual information about each utterance determines interpolation weights among a number of n-gram language models. On-demand interpolation results in an 11.2% relative reduction in WER compared to using a single language model to handle all traffic.",
"title": ""
}
] |
scidocsrr
|
a0d22a863b254dccd516fa63ae9be5e2
|
Electronic word-of-mouth: Challenges and opportunities
|
[
{
"docid": "80ce6c8c9fc4bf0382c5f01d1dace337",
"text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.",
"title": ""
}
] |
[
{
"docid": "8214191a507f7eb2d9c3315e8959c08d",
"text": "This paper addresses issues about the rejection of false jammer targets in the presence of digital radio frequency memory (DRFM) repeat jammer. An anti-jamming filtering technique is proposed that it can eliminate this type of jamming signal. By using a stretch processing with a particular selected reference signal, the presented method can fully separate the echoes being reflected from the true targets and the signals being re-transmitted by a jammer in frequency domain. Therefore, utilizing the nonoverlapping properties of the received signals, filters or suchlike techniques can be used to reject the undesired jamming signals. Particularly, this method does not require estimation of jamming signal parameters and does not involve a great computation burden. Simulations are given to show the validity of the introduced approach.",
"title": ""
},
{
"docid": "f9076f4dbc5789e89ed758d0ad2c6f18",
"text": "This paper presents an innovative manner of obtaining discriminative texture signatures by using the LBP approach to extract additional sources of information from an input image and by using fractal dimension to calculate features from these sources. Four strategies, called Min, Max, Diff Min and Diff Max , were tested, and the best success rates were obtained when all of them were employed together, resulting in an accuracy of 99.25%, 72.50% and 86.52% for the Brodatz, UIUC and USPTex databases, respectively, using Linear Discriminant Analysis. These results surpassed all the compared methods in almost all the tests and, therefore, confirm that the proposed approach is an effective tool for texture analysis. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ab02c4ebc5449a4371e7ebd22fd0db48",
"text": "A number of marketing phenomena are too complex for conventional analytical or empirical approaches. This makes marketing a costly process of trial and error: proposing, imagining, trying in the real world, and seeing results. Alternatively, Agent-based Social Simulation (ABSS) is becoming the most popular approach to model and study these phenomena. This research paradigm allows modeling a virtual market to: design, understand, and evaluate marketing hypotheses before taking them to the real world. However, there are shortcomings in the specialized literature such as the lack of methods, data, and implemented tools to deploy a realistic virtual market with ABSS. To advance the state of the art in this complex and interesting problem, this paper is a seven-fold contribution based on a (1) method to design and validate viral marketing strategies in Twitter by ABSS. The method is illustrated with the widely studied problem of rumor diffusion in social networks. After (2) an extensive review of the related works for this problem, (3) an innovative spread model is proposed which rests on the exploratory data analysis of two different rumor datasets in Twitter. Besides, (4) new strategies are proposed to control malicious gossips. (5) The experimental results validate the realism of this new propagation model with the datasets and (6) the strategies performance is evaluated over this model. (7) Finally, the article is complemented by a free and open-source simulator.",
"title": ""
},
{
"docid": "6761bd757cdd672f60c980b081d4dbc8",
"text": "Real-time eye and iris tracking is important for handsoff gaze-based password entry, instrument control by paraplegic patients, Internet user studies, as well as homeland security applications. In this project, a smart camera, LabVIEW and vision software tools are utilized to generate eye detection and tracking algorithms. The algorithms are uploaded to the smart camera for on-board image processing. Eye detection refers to finding eye features in a single frame. Eye tracking is achieved by detecting the same eye features across multiple image frames and correlating them to a particular eye. The algorithms are tested for eye detection and tracking under different conditions including different angles of the face, head motion speed, and eye occlusions to determine their usability for the proposed applications. This paper presents the implemented algorithms and performance results of these algorithms on the smart camera.",
"title": ""
},
{
"docid": "5179662c841302180848dc566a114f10",
"text": "Hyperspectral image (HSI) unmixing has attracted increasing research interests in recent decades. The major difficulty of it lies in that the endmembers and the associated abundances need to be separated from highly mixed observation data with few a priori information. Recently, sparsity-constrained nonnegative matrix factorization (NMF) algorithms have been proved effective for hyperspectral unmixing (HU) since they can sufficiently utilize the sparsity property of HSIs. In order to improve the performance of NMF-based unmixing approaches, spectral and spatial constrains have been added into the unmixing model, but spectral-spatial joint structure is required to be more accurately estimated. To exploit the property that similar pixels within a small spatial neighborhood have higher possibility to share similar abundances, hypergraph structure is employed to capture the similarity relationship among the spatial nearby pixels. In the construction of a hypergraph, each pixel is taken as a vertex of the hypergraph, and each vertex with its k nearest spatial neighboring pixels form a hyperedge. Using the hypergraph, the pixels with similar abundances can be accurately found, which enables the unmixing algorithm to obtain promising results. Experiments on synthetic data and real HSIs are conducted to investigate the performance of the proposed algorithm. The superiority of the proposed algorithm is demonstrated by comparing it with some state-of-the-art methods.",
"title": ""
},
{
"docid": "fe360177f5a13e4b50489a6a96bead01",
"text": "Previous work on automatic summarization does not thoroughly consider coherence while generating the summary. We introduce a graph-based approach to summarize scientific articles. We employ coherence patterns to ensure that the generated summaries are coherent. The novelty of our model is twofold: we mine coherence patterns in a corpus of abstracts, and we propose a method to combine coherence, importance and non-redundancy to generate the summary. We optimize these factors simultaneously using Mixed Integer Programming. Our approach significantly outperforms baseline and state-of-the-art systems in terms of coherence (summary coherence assessment) and relevance (ROUGE scores).",
"title": ""
},
{
"docid": "f4a2e2cc920e28ae3d7539ba8b822fb7",
"text": "Neurologic injuries, such as stroke, spinal cord injuries, and weaknesses of skeletal muscles with elderly people, may considerably limit the ability of this population to achieve the main daily living activities. Recently, there has been an increasing interest in the development of wearable devices, the so-called exoskeletons, to assist elderly as well as patients with limb pathologies, for movement assistance and rehabilitation. In this paper, we review and discuss the state of the art of the lower limb exoskeletons that are mainly used for physical movement assistance and rehabilitation. An overview of the commonly used actuation systems is presented. According to different case studies, a classification and comparison between different types of actuators is conducted, such as hydraulic actuators, electrical motors, series elastic actuators, and artificial pneumatic muscles. Additionally, the mainly used control strategies in lower limb exoskeletons are classified and reviewed, based on three types of human-robot interfaces: the signals collected from the human body, the interaction forces between the exoskeleton and the wearer, and the signals collected from exoskeletons. Furthermore, the performances of several typical lower limb exoskeletons are discussed, and some assessment methods and performance criteria are reviewed. Finally, a discussion of the major advances that have been made, some research directions, and future challenges are presented.",
"title": ""
},
{
"docid": "8e6be29997001367542283e94c7d8f05",
"text": "Character recognition has been widely used since its inception in applications involved processing of scanned or camera-captured documents. There exist multiple scripts in which the languages are written. The scripts could broadly be divided into cursive and non-cursive scripts. The recurrent neural networks have been proved to obtain state-of-the-art results for optical character recognition. We present a thorough investigation of the performance of recurrent neural network (RNN) for cursive and non-cursive scripts. We employ bidirectional long short-term memory (BLSTM) networks, which is a variant of the standard RNN. The output layer of the architecture used to carry out our investigation is a special layer called connectionist temporal classification (CTC) which does the sequence alignment. The CTC layer takes as an input the activations of LSTM and aligns the target labels with the inputs. The results were obtained at the character level for both cursive Urdu and non-cursive English scripts are significant and suggest that the BLSTM technique is potentially more useful than the existing OCR algorithms.",
"title": ""
},
{
"docid": "05bcc85ca42945987a6f0c6c2839fa0a",
"text": "Abstract. Blockchain has many benefits including decentralization, availability, persistency, consistency, anonymity, auditability and accountability, and it also covers a wide spectrum of applications ranging from cryptocurrency, financial services, reputation system, Internet of Things, sharing economy to public and social services. Not only may blockchain be regarded as a by-product of Bitcoin cryptocurrency systems, but also it is a type of distributed ledger technology through using a trustworthy, decentralized log of totally ordered transactions. By summarizing the literature of blockchain, it is found that more papers focus on engineering implementation and realization, while little work has been done on basic theory, for example, mathematical models (Markov processes, queueing theory and game models), performance analysis and optimization of blockchain systems. In this paper, we develop queueing theory of blockchain systems and provide system performance evaluation. To do this, we design a Markovian batch-service queueing system with two different service stages, while the two stages are suitable to well express the mining process in the miners pool and the building of a new blockchain. By using the matrix-geometric solution, we obtain a system stable condition and express three key performance measures: (a) The number of transactions in the queue, (b) the number of transactions in a block, and (c) the transaction-confirmation time. Finally, We use numerical examples to verify computability of our theoretical results. Although our queueing model is simple under exponential or Poisson assumptions, our analytic method will open a series of potentially promising research in queueing theory of blockchain systems.",
"title": ""
},
{
"docid": "1b7048c328414573f55cc4aed2744496",
"text": "Structural Health Monitoring (SHM) can be understood as the integration of sensing and intelligence to enable the structure loading and damage-provoking conditions to be recorded, analyzed, localized, and predicted in such a way that nondestructive testing becomes an integral part of them. In addition, SHM systems can include actuation devices to take proper reaction or correction actions. SHM sensing requirements are very well suited for the application of optical fiber sensors (OFS), in particular, to provide integrated, quasi-distributed or fully distributed technologies. In this tutorial, after a brief introduction of the basic SHM concepts, the main fiber optic techniques available for this application are reviewed, emphasizing the four most successful ones. Then, several examples of the use of OFS in real structures are also addressed, including those from the renewable energy, transportation, civil engineering and the oil and gas industry sectors. Finally, the most relevant current technical challenges and the key sector markets are identified. This paper provides a tutorial introduction, a comprehensive background on this subject and also a forecast of the future of OFS for SHM. In addition, some of the challenges to be faced in the near future are addressed.",
"title": ""
},
{
"docid": "c736258623c7f977ebc00f5555d13e02",
"text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.",
"title": ""
},
{
"docid": "f709802a6da7db7c71dfa67930111b04",
"text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.",
"title": ""
},
{
"docid": "29505dcb2a40123c6ff700bf1017b5ce",
"text": "The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.",
"title": ""
},
{
"docid": "f6df133663ab4342222d95a20cd09996",
"text": "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open the door for inappropriate online activities, such as harassment, in which some users post messages in a virtual community that are intentionally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently few systems attempt to solve this problem. In this paper, we use a supervised learning approach for detecting harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experimental results described herein show that our method achieves significant improvements over several baselines, including Term FrequencyInverse Document Frequency (TFIDF) approaches. Identification of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.",
"title": ""
},
{
"docid": "74da516d4a74403ac5df760b0b656b1f",
"text": "In this paper a novel and effective approach for automated audio classification is presented that is based on the fusion of different sets of features, both visual and acoustic. A number of different acoustic and visual features of sounds are evaluated and compared. These features are then fused in an ensemble that produces better classification accuracy than other state-of-the-art approaches. The visual features of sounds are built starting from the audio file and are taken from images constructed from different spectrograms, a gammatonegram, and a rhythm image. These images are divided into subwindows from which a set of texture descriptors are extracted. For each feature descriptor a different Support Vector Machine (SVM) is trained. The SVMs outputs are summed for a final decision. The proposed ensemble is evaluated on three well-known databases of music genre classification (the Latin Music Database, the ISMIR 2004 database, and the GTZAN genre collection), a dataset of Bird vocalization aiming specie recognition, and a dataset of right whale calls aiming whale detection. The MATLAB code for the ensemble of classifiers and for the extraction of the features will be publicly available (https://www.dei.unipd.it/node/2357 +Pattern Recognition and Ensemble Classifiers).",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "a552f0ee9fafe273859a11f29cf7670d",
"text": "A majority of the existing stereo matching algorithms assume that the corresponding color values are similar to each other. However, it is not so in practice as image color values are often affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera should not be relied on completely, and the assumption of color consistency does not hold good between stereo images in real scenes. Therefore, the performance of most conventional stereo matching algorithms can be severely degraded under the radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called the Adaptive Normalized Cross-Correlation (ANCC), for a robust and accurate correspondence measure. The advantage of our method is that it is robust to lighting geometry, illuminant color, and camera parameter changes between left and right images, and does not suffer from the fattening effect unlike conventional Normalized Cross-Correlation (NCC). Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.",
"title": ""
},
{
"docid": "5e2536588d34ab0067af1bd716489531",
"text": "Recommender systems support user decision-making, and explanations of recommendations further facilitate their usefulness. Previous explanation styles are based on similar users, similar items, demographics of users, and contents of items. Contexts, such as usage scenarios and accompanying persons, have not been used for explanations, although they influence user decisions. In this paper, we propose a context style explanation method, presenting contexts suitable for consuming recommended items. The expected impacts of context style explanations are 1) persuasiveness: recognition of suitable context for usage motivates users to consume items, and 2) usefulness: envisioning context helps users to make right choices because the values of items depend on contexts. We evaluate context style persuasiveness and usefulness by a crowdsourcing-based user study in a restaurant recommendation setting. The context style explanation is compared to demographic and content style explanations. We also combine context style and other explanation styles, confirming that hybrid styles improve persuasiveness and usefulness of explanation.",
"title": ""
},
{
"docid": "291ee9114488b7b8e20e9568fbf85afe",
"text": "Today, data availability has gone from scarce to superabundant. Technologies like IoT, trends in social media and the capabilities of smart-phones are producing and digitizing lots of data that was previously unavailable. This massive increase of data creates opportunities to gain new business models, but also demands new techniques and methods of data quality in knowledge discovery, especially when the data comes from different sources (e.g., sensors, social networks, cameras, etc.). The data quality process of the data set proposes conclusions about the information they contain. This is increasingly done with the aid of data cleaning approaches. Therefore, guaranteeing a high data quality is considered as the primary goal of the data scientist. In this paper, we propose a process for data cleaning in regression models (DC-RM). The proposed data cleaning process is evaluated through a real datasets coming from the UCI Repository of Machine Learning Databases. With the aim of assessing the data cleaning process, the dataset that is cleaned by DC-RM was used to train the same regression models proposed by the authors of UCI datasets. The results achieved by the trained models with the dataset produced by DC-RM are better than or equal to that presented by the datasets’ authors.",
"title": ""
},
{
"docid": "9c15e5ef720d42e1cc6d757391946146",
"text": "Verifying robustness of neural network classifiers has attracted great interests and attention due to the success of deep neural networks and their unexpected vulnerability to adversarial perturbations. Although finding minimum adversarial distortion of neural networks (with ReLU activations) has been shown to be an NP-complete problem, obtaining a non-trivial lower bound of minimum distortion as a provable robustness guarantee is possible. However, most previous works only focused on simple fully-connected layers (multilayer perceptrons) and were limited to ReLU activations. This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks. Our framework is general – we can handle various architectures including convolutional layers, max-pooling layers, batch normalization layer, residual blocks, as well as general activation functions; our approach is efficient – by exploiting the special structure of convolutional layers, we achieve up to 17 and 11 times of speed-up compared to the state-of-the-art certification algorithms (e.g. Fast-Lin, CROWN) and 366 times of speed-up compared to the dual-LP approach while our algorithm obtains similar or even better verification bounds. In addition, CNN-Cert generalizes state-of-the-art algorithms e.g. Fast-Lin and CROWN. We demonstrate by extensive experiments that our method outperforms state-of-the-art lowerbound-based certification algorithms in terms of both bound quality and speed.",
"title": ""
}
] |
scidocsrr
|
e31bba4be9c13b0611101be7b86081df
|
Multi-Level Fusion for Person Re-identification with Incomplete Marks
|
[
{
"docid": "dbe5661d99798b24856c61b93ddb2392",
"text": "Traditionally, appearance models for recognition, reacquisition and tracking problems have been evaluated independently using metrics applied to a complete system. It is shown that appearance models for these three problems can be evaluated using a cumulative matching curve on a standardized dataset, and that this one curve can be converted to a synthetic reacquisition or disambiguation rate for tracking. A challenging new dataset for viewpoint invariant pedestrian recognition (VIPeR) is provided as an example. This dataset contains 632 pedestrian image pairs from arbitrary viewpoints. Several baseline methods are tested on this dataset and the results are presented as a benchmark for future appearance models and matchin methods.",
"title": ""
},
{
"docid": "ab2159730f00662ba29e25a0e27d1799",
"text": "This paper proposes a novel and efficient re-ranking technque to solve the person re-identification problem in the surveillance application. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct matching may be not included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and other gallery person images, and revise the initial query result according to bidirectional ranking lists. The behind philosophy of our method is that images of the same person should not only have similar visual content, refer to content similarity, but also possess similar k-nearest neighbors, refer to context similarity. Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of computation load is accomplished by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which is especially suited to the real-time required video investigation task. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.",
"title": ""
},
{
"docid": "6c69be0c2a16efbe00c557650a856b21",
"text": "Visually identifying a target individual reliably in a crowded environment observed by a distributed camera network is critical to a variety of tasks in managing business information, border control, and crime prevention. Automatic re-identification of a human candidate from public space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity between different people, compounded by low-resolution and poor quality video data. In this work, we propose a novel method for re-identification that learns a selection and weighting of mid-level semantic attributes to describe people. Specifically, the model learns an attribute-centric, parts-based feature representation. This differs from and complements existing low-level features for re-identification that rely purely on bottom-up statistics for feature selection, which are limited in discriminating and identifying reliably visual appearances of target people appearing in different camera views under certain degrees of occlusion due to crowdedness. Our experiments demonstrate the effectiveness of our approach compared to existing feature representations when applied to benchmarking datasets.",
"title": ""
}
] |
[
{
"docid": "2eb303f3382491ae1977a3e907f197c0",
"text": "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack of diversity in the sense that a fixed image usually leads to (almost) deterministic translation result. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations. The results demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "6c1138ec8f490f824e34d15c13593007",
"text": "We present a DSP simulation environment that will enable students to perform laboratory exercises using Android mobile devices and tablets. Due to the pervasive nature of the mobile technology, education applications designed for mobile devices have the potential to stimulate student interest in addition to offering convenient access and interaction capabilities. This paper describes a portable signal processing laboratory for the Android platform. This software is intended to be an educational tool for students and instructors in DSP, and signals and systems courses. The development of Android JDSP (A-JDSP) is carried out using the Android SDK, which is a Java-based open source development platform. The proposed application contains basic DSP functions for convolution, sampling, FFT, filtering and frequency domain analysis, with a convenient graphical user interface. A description of the architecture, functions and planned assessments are presented in this paper. Introduction Mobile technologies have grown rapidly in recent years and play a significant role in modern day computing. The pervasiveness of mobile devices opens up new avenues for developing applications in education, entertainment and personal communications. Understanding the effectiveness of smartphones and tablets in classroom instruction have been a subject of considerable research in recent years. The advantages of handheld devices over personal computers in K-12 education have been investigated 1 . The study has found that the easy accessibility and maneuverability of handheld devices lead to an increase in student interest. By incorporating mobile technologies into mathematics and applied mathematics courses, it has been shown that smartphones can broaden the scope and effectiveness of technical education in classrooms 2 . Fig 1: Splash screen of the AJDSP Android application Designing interactive applications to complement traditional teaching methods in STEM education has also been of considerable interest. The role of interactive learning in knowledge dissemination and acquisition has been discussed and it has been found to assist in the development of cognitive skills 3 . It has been showed learning potential is enhanced when education tools that possess a higher degree of interactivity are employed 4 . Software applications that incorporate visual components in learning, in order to simplify the understanding of complex theoretical concepts, have been also been developed 5-9 . These applications are generally characterized by rich user interaction and ease of accessibility. Modern mobile phones and tablets possess abundant memory and powerful processors, in addition to providing highly interactive interfaces. These features enable the design of applications that require intensive calculations to be supported on mobile devices. In particular, Android operating system based smartphones and tablets have large user base and sophisticated hardware configurations. Though several applications catering to elementary school education have been developed for Android devices, not much effort has been undertaken towards building DSP simulation applications 10 . In this paper, we propose a mobile based application that will enable students to perform Digital Signal Processing laboratories on their smartphone devices (Figure 1). In order to enable students to perform DSP labs over the Internet, the authors developed J-DSP, a visual programming environment 11-12 . 
J-DSP was designed as a zero footprint, standalone Java applet that can run directly on a browser. Several interactive laboratories have been developed and assessed in undergraduate courses. In addition to containing basic signal processing functions such as sampling, convolution, digital filter design and spectral analysis, J-DSP is also supported by several toolboxes. An iOS version of the software has also been developed and presented 13-15 . Here, we describe an Android based graphical application, A-JDSP, for signal processing simulation. The proposed tool has the potential to enhance DSP education by supporting both educators and students alike to teach and learn digital signal processing. The rest of the paper is organized as follows. We review related work in Section 2 and present the architecture of the proposed application in Section 3. In Section 4 we describe some of the functionalities of the software. We describe planned assessment strategies for the proposed application in Section 5. The concluding remarks and possible directions of extending this work are discussed in Section 6. Related Work Commercial packages such as MATLAB 16 and LabVIEW 17 are commonly used in signal processing research and application development. J-DSP, a web-based graphical DSP simulation package, was proposed as a non-commercial alternative for performing laboratories in undergraduate courses 3 . Though J-DSP is a light-weight application, running J-DSP over the web on mobile devices can be data-intensive. Hence, executing simulations directly on the mobile device is a suitable alternative. A mobile application that supports functions pertinent to different areas in electrical engineering, such as circuit theory, control systems and DSP has been reported 18 . However, it does not contain a comprehensive set of functions to simulate several DSP systems. In addition to this, a mobile interface for the MATLAB package has been released 19 . However, this requires an active version of MATLAB on a remote machine and a high speed internet connection to access the remote machine from the mobile device. In order to circumvent these problems, i-JDSP, an iOS version of the J-DSP software was proposed 13-15 . It implements DSP functions and algorithms optimized for mobile devices, thereby removing the need for internet connectivity. Our work builds upon J-DSP 11-12 and the iOS version of J-DSP 13-15 , and proposes to build an application for the Android operating system. Presently, to the best of our knowledge, there are no freely available Android applications that focus on signal processing education. Architecture The proposed application is implemented using Android-SDK 22 , which is a Java based development framework. The user interfaces are implemented using XML as it is well suited for Android development. The architecture of the proposed system is illustrated in Figure 2. It has five main components: (i) User Interfaces, (ii) Part Object, (iii) Part Calculator, (iv) Part View, and (v) Parts Controller. The role of each of them is described below in detail. The blocks in A-JDSP can be accessed through a function palette (user interface) and each block is associated with a view using which the function properties can be modified. The user interfaces obtain the user input data and pass them to the Part Object. Furthermore, every block has a separate Calculator function to perform the mathematical and signal processing algorithms. 
The Part Calculator uses the data from the input pins of the block, implements the relevant algorithms and updates the output pins. Figure 2. Architecture of AJDSP. Parts Controller Part Calculator Part Object User Interface Part View All the configuration information, such as the pin specifications, the part name and location of the block is contained in the Part Object class. In addition, the Part Object can access the data from each of the input pins of the block. When the user adds a particular block in the simulation, an instance of the Part Object class is created and is stored by a list object in the Parts Controller. The Parts Controller is an interface between the Part Object and the Part View. One of the main functions of Parts Controller is supervising block creation. The process of block creation by the Parts Controller can be described as follows: The block is configured by the user through the user interface and the block data is passed to an instance of the Part Object class. The Part Object then sends the block configuration information through the Parts Controller to the Part View, which finally renders the block. The Part View is the main graphical interface of the application. This displays the blocks and connections on the screen. It contains functionalities for selecting, moving and deleting blocks. Examples of block diagrams in the A-JDSP application for different simulations are illustrated in Figure 3(a), Figure 4(a) and Figure 5(a) respectively. Functionalities In this section, we describe some of the DSP functionalities that have been developed as part of A-JDSP. Android based Signal Generator block This generates the various input signals necessary for A-JDSP simulations. In addition to deterministic signals such as square, triangular and sinusoids; random signals from Gaussian Rayleigh and Uniform distributions can be generated. The signal related parameters such as signal frequency, time shift, mean and variance can be set through the user interface.",
"title": ""
},
{
"docid": "1c06e82a20b72c8c1ec7d493d7dbee78",
"text": "Automotive industry is facing a multitude of challenges towards sustainability that can be partly also addressed by product design: o Climate change and oil dependency. The growing weight of evidence holds that manmade greenhouse gas emissions are starting to influence the world’s climate in ways that affect all parts of the globe (IPCC 2007) – along with growing concerns over the use and availability of fossil carbon. There is a need for timely action including those in vehicle design. o Air Quality and other emissions as noise. Summer smog situa tions frequently lead to traffic restrictions for vehicles not compliant to most recent emission standards. Other emissions as noise affect up to 80 million citizens – much of it caused by the transport sector (roads, railway, aircraft, etc.) (ERF 2007). o Mobility Capability. Fulfilling the societal mobility demand is a key factor enabling (sustainable) development. This is challenged where the infrastructure is not aligned to the mobility demand and where the mobility capability of the individual transport mode (cars, trains, etc.) are not fulfilling these needs – leading to unnecessary travel time and emissions (traffic jams, non-direct connections, lack of parking opportunities, etc.). In such areas, insufficient infrastructure is the reason for 38% of CO2 vehicle emissions (SINTEF 2007). Industry has also to consider changing mobility needs in aging societies. o Safety. Road accidents (including all related transport modes as well as pedestrians) result to 1.2 million fatalities globally according to the World Bank. o Affordability. As mobility is an important precondition for any development it is important that all the mobility solutions are affordable for the targeted regions and markets. All these challenges are both, risks and business opportunities.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "da3e4903974879868b87b94d7cc0bf21",
"text": "INTRODUCTION\nThe existence of maternal health service does not guarantee its use by women; neither does the use of maternal health service guarantee optimal outcomes for women. The World Health Organization recommends monitoring and evaluation of maternal satisfaction to improve the quality and efficiency of health care during childbirth. Thus, this study aimed at assessing maternal satisfaction on delivery service and factors associated with it.\n\n\nMETHODS\nCommunity based cross-sectional study was conducted in Debre Markos town from March to April 2014. Systematic random sampling technique were used to select 398 mothers who gave birth within one year. The satisfaction of mothers was measured using 19 questions which were adopted from Donabedian quality assessment framework. Binary logistic regression was fitted to identify independent predictors.\n\n\nRESULT\nAmong mothers, the overall satisfaction on delivery service was found to be 318 (81.7%). Having plan to deliver at health institution (AOR = 3.30, 95% CI: 1.38-7.9) and laboring time of less than six hours (AOR = 4.03, 95% CI: 1.66-9.79) were positively associated with maternal satisfaction on delivery service. Those mothers who gave birth using spontaneous vaginal delivery (AOR = 0.11, 95% CI: 0.023-0.51) were inversely related to maternal satisfaction on delivery service.\n\n\nCONCLUSION\nThis study revealed that the overall satisfaction of mothers on delivery service was found to be suboptimal. Reasons for delivery visit, duration of labor, and mode of delivery are independent predictors of maternal satisfaction. Thus, there is a need of an intervention on the independent predictors.",
"title": ""
},
{
"docid": "a607d049ef590f13b31566a14e158dc9",
"text": "In this video, we present our latest results towards fully autonomous flights with a small helicopter. Using a monocular camera as the only exteroceptive sensor, we fuse inertial measurements to achieve a self-calibrating power-on-and-go system, able to perform autonomous flights in previously unknown, large, outdoor spaces. Our framework achieves Simultaneous Localization And Mapping (SLAM) with previously unseen robustness in onboard aerial navigation for small platforms with natural restrictions on weight and computational power. We demonstrate successful operation in flights with altitude between 0.2-70 m, trajectories with 350 m length, as well as dynamic maneuvers with track speed of 2 m/s. All flights shown are performed autonomously using vision in the loop, with only high-level waypoints given as directions.",
"title": ""
},
{
"docid": "867ddbd84e8544a5c2d6f747756ca3d9",
"text": "We report a 166 W burst mode pulse fiber amplifier seeded by a Q-switched mode-locked all-fiber laser at 1064 nm based on a fiber-coupled semiconductor saturable absorber mirror. With a pump power of 230 W at 976 nm, the output corresponds to a power conversion efficiency of 74%. The repetition rate of the burst pulse is 20 kHz, the burst energy is 8.3 mJ, and the burst duration is ∼ 20 μs, which including about 800 mode-locked pulses at a repetition rate of 40 MHz and the width of the individual mode-locked pulse is measured to be 112 ps at the maximum output power. To avoid optical damage to the fiber, the initial mode-locked pulses were stretched to 72 ps by a bandwidth-limited fiber bragg grating. After a two-stage preamplifier, the pulse width was further stretched to 112 ps, which is a result of self-phase modulation of the pulse burst during the amplification.",
"title": ""
},
{
"docid": "370a2009695f1a18b2e6dbe6bc463bb0",
"text": "While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust for automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust for AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, “isolated” events, for example understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter. Furthermore, factors-affecting trust change, both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as validating and testing the trust-based framework.",
"title": ""
},
{
"docid": "a2f3b158f1ec7e6ecb68f5ddfeaf0502",
"text": "Facial landmark detection of face alignment has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multitask learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [29]. In this technical report, we extend the method presented in our ECCV 2014 [39] paper to handle more landmark points (68 points instead of 5 major facial points) without either redesigning the deep model or involving significant increase in run time cost. This is made possible by transferring the learned 5-point model to the desired facial landmark configuration, through model fine-tuning with dense landmark annotations. Our new model achieves the state-of-the-art result on the 300-W benchmark dataset (mean error of 9.15% on the challenging IBUG subset).",
"title": ""
},
{
"docid": "78cbb5522d1eb479f194dccec53307c4",
"text": "Introduction: The roots of the cannabis plant have a long history of medical use stretching back millennia. However, the therapeutic potential of cannabis roots has been largely ignored in modern times. Discussion: In the first century, Pliny the Elder described in Natural Histories that a decoction of the root in water could be used to relieve stiffness in the joints, gout, and related conditions. By the 17th century, various herbalists were recommending cannabis root to treat inflammation, joint pain, gout, and other conditions. There has been a subsequent paucity of research in this area, with only a few studies examining the composition of cannabis root and its medical potential. Active compounds identified and measured in cannabis roots include triterpenoids, friedelin (12.8 mg/kg) and epifriedelanol (21.3 mg/kg); alkaloids, cannabisativine (2.5 mg/kg) and anhydrocannabisativine (0.3 mg/kg); carvone and dihydrocarvone; N-(p-hydroxy-β-phenylethyl)-p-hydroxy-trans-cinnamamide (1.6 mg/kg); various sterols such as sitosterol (1.5%), campesterol (0.78%), and stigmasterol (0.56%); and other minor compounds, including choline. Of note, cannabis roots are not a significant source of Δ9-tetrahydrocannabinol (THC), cannabidiol, or other known phytocannabinoids. Conclusion: The current available data on the pharmacology of cannabis root components provide significant support to the historical and ethnobotanical claims of clinical efficacy. Certainly, this suggests the need for reexamination of whole root preparations on inflammatory and malignant conditions employing modern scientific techniques.",
"title": ""
},
{
"docid": "36c556c699db79c3a84a897b7b382c73",
"text": "This paper presents a new fingerprint minutiae extraction approach that is based on the analysis of the ridge flux distribution. The considerable processing time taken by the conventional approaches, most of which use the ridge thinning process with a rather large calculation time, is a problem that has recently attracted increased attention. We observe that the features of a ridge curve are very similar to those of a vector flux such as a line of electric force or a line of magnetic force. In the proposed approach, vector flux analysis is applied to detect minutiae without using the ridge thinning process in order to reduce the computation time. The experimental results show that the proposed approach can achieve a reduction in calculation time, while achieving the same success detection rate as that of the conventional approaches.",
"title": ""
},
{
"docid": "20f4bcde35458104271e9127d8b7f608",
"text": "OBJECTIVES\nTo evaluate the effect of bulk-filling high C-factor posterior cavities on adhesion to cavity-bottom dentin.\n\n\nMETHODS\nA universal flowable composite (G-ænial Universal Flo, GC), a bulk-fill flowable base composite (SDR Posterior Bulk Fill Flowable Base, Dentsply) and a conventional paste-like composite (Z100, 3M ESPE) were bonded (G-ænial Bond, GC) into standardized cavities with different cavity configurations (C-factors), namely C=3.86 (Class-I cavity of 2.5mm deep, bulk-filled), C=5.57 (Class-I cavity of 4mm deep, bulk-filled), C=1.95 (Class-I cavity of 2.5mm deep, filled in three equal layers) and C=0.26 (flat surface). After one-week water storage, the restorations were sectioned in 4 rectangular micro-specimens and subjected to a micro-tensile bond strength (μTBS) test.\n\n\nRESULTS\nHighly significant differences were found between pairs of means of the experimental groups (Kruskal-Wallis, p<0.0001). Using the bulk-fill flowable base composite SDR (Dentsply), no significant differences in μTBS were measured among all cavity configurations (p>0.05). Using the universal flowable composite G-ænial Universal Flo (GC) and the conventional paste-like composite Z100 (3M ESPE), the μTBS to cavity-bottom dentin was not significantly different from that of SDR (Dentsply) when the cavities were layer-filled or the flat surface was build up in layers; it was however significantly lower when the Class-I cavities were filled in bulk, irrespective of cavity depth.\n\n\nSIGNIFICANCE\nThe filling technique and composite type may have a great impact on the adhesion of the composite, in particular in high C-factor cavities. While the bulk-fill flowable base composite provided satisfactory bond strengths regardless of filling technique and cavity depth, adhesion failed when conventional composites were used in bulk.",
"title": ""
},
{
"docid": "f8763404f21e3bea6744a3fb51838569",
"text": "Search engine advertising in the present day is a pronounced component of the Web. Choosing the appropriate and relevant ad for a particular query and positioning of the ad critically impacts the probability of being noticed and clicked. It also strategically impacts the revenue, the search engine shall generate from a particular Ad. Needless to say, showing the user an Ad that is relevant to his/her need greatly improves users satisfaction. For all the aforesaid reasons, its of utmost importance to correctly determine the click-through rate (CTR) of ads in a system. For frequently appearing ads, CTR is empirically measurable, but for the new ads, other means have to be devised. In this paper we propose and establish a model to predict the CTRs of advertisements adopting Logistic Regression as the effective framework for representing and constructing conditions and vulnerabilities among variables. Logistic Regression is a type of probabilistic statistical classification model that predicts a binary response from a binary predictor, based on one or more predictor variables. Advertisements that have the most elevated to be clicked are chosen using supervised machine learning calculation. We tested Logistic Regression algorithm on a one week advertisement data of size around 25 GB by considering position and impression as predictor variables. Using this prescribed model we were able to achieve around 90% accuracy for CTR estimation.",
"title": ""
},
{
"docid": "30d0ff3258decd5766d121bf97ae06d4",
"text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.",
"title": ""
},
{
"docid": "16cd40642b6179cbf08ed09577c12bc9",
"text": "Considerable scientific and technological efforts have been devoted to develop neuroprostheses and hybrid bionic systems that link the human nervous system with electronic or robotic prostheses, with the main aim of restoring motor and sensory functions in disabled patients. A number of neuroprostheses use interfaces with peripheral nerves or muscles for neuromuscular stimulation and signal recording. Herein, we provide a critical overview of the peripheral interfaces available and trace their use from research to clinical application in controlling artificial and robotic prostheses. The first section reviews the different types of non-invasive and invasive electrodes, which include surface and muscular electrodes that can record EMG signals from and stimulate the underlying or implanted muscles. Extraneural electrodes, such as cuff and epineurial electrodes, provide simultaneous interface with many axons in the nerve, whereas intrafascicular, penetrating, and regenerative electrodes may contact small groups of axons within a nerve fascicle. Biological, technological, and material science issues are also reviewed relative to the problems of electrode design and tissue injury. The last section reviews different strategies for the use of information recorded from peripheral interfaces and the current state of control neuroprostheses and hybrid bionic systems.",
"title": ""
},
{
"docid": "d669dfcdc2486314bd7234e1f42357de",
"text": "The Luneburg lens (LL) represents a very attractive candidate for many applications such as multibeam antennas, multifrequency scanning, and spatial scanning, due to its focusing properties. Indeed, it is a dielectric sphere on which each surface point is a frequency-independent perfect focusing point. This is produced by its index governing law n, which follows the radial distribution n/sup 2/=2-r/sup 2/, where r is the normalized radial position. Practically, an LL is manufactured as a finite number of concentric homogeneous dielectric shells - this is called a discrete LL. The inaccuracies in the curved shell manufacturing process produce intershell air gaps, which degrade the performance of the lens. Furthermore, this requires different materials whose relative dielectric constant covers the range 1-2. The paper proposes a new LL manufacturing process to avoid these drawbacks. The paper describe the theoretical background and the performance of the obtained lens.",
"title": ""
},
{
"docid": "d42a30b26ef26e7bf9b4e5766d620395",
"text": "Development of Web 2.0 enabled users to share information online, which results into an exponential growth of world wide web data. This leads to the so-called information overload problem. Recommender Systems (RS) are intelligent systems, helping on-line users to overcome information overload by providing customized recommendations on various items. In real world, people are willing to take advice and recommendation from their trustworthy friends only. Trust plays a key role in the decision-making process of a person. Incorporation of trust information in RS, results in a new class of recommender systems called trust aware recommender systems (TARS). This paper presents a survey on various implicit trust generation techniques in context of TARS. We have analyzed eight different implicit trust metrics, with respect to various properties of trust proposed by researchers in regard to TARS. Keywords—implicit trust; trust aware recommender system; trust metrics.",
"title": ""
},
{
"docid": "8bb65350ae35b66f54859444ea063bb2",
"text": "Over the course of the next 10 years, the Internet of Things (IoT) is set to have a transformational effect on the everyday technologies which surround us. Access to the data produced by these devices opens an interesting space to practice discovery based learning. This paper outlines a participatory design approach taken to develop an IoTbased ecosystem which was deployed in 8 schools across England. In particular, we describe how we designed and developed the system and reflect on some of the early experiences of students and teachers. We found that schools were willing to adopt the IoT technology within certain bounds and we outline best practices uncovered when introducing technologies to schools.",
"title": ""
},
{
"docid": "42520b1cfaec4a5f890f7f0845d5459b",
"text": "Class imbalance problem is quite pervasive in our nowadays human practice. This problem basically refers to the skewness in the data underlying distribution which, in turn, imposes many difficulties on typical machine learning algorithms. To deal with the emerging issues arising from multi-class skewed distributions, existing efforts are mainly divided into two categories: model-oriented solutions and data-oriented techniques. Focusing on the latter, this paper presents a new over-sampling technique which is inspired by Mahalanobis distance. The presented over-sampling technique, called MDO (Mahalanobis Distance-based Over-sampling technique), generates synthetic samples which have the same Mahalanobis distance from the considered class mean as other minority class examples. By preserving the covariance structure of the minority class instances and intelligently generating synthetic samples along the probability contours, new minority class instances are modelled better for learning algorithms. Moreover, MDO can reduce the risk of overlapping between different class regions which are considered as a serious challenge in multi-class problems. Our theoretical analyses and empirical observations across wide spectrum multi-class imbalanced benchmarks indicate that MDO is the method of choice by offering statistical superior MAUC and precision compared to the popular over-sampling techniques.",
"title": ""
}
] |
scidocsrr
|
f6c5620afa78588d3bfef71f6690a2fc
|
Automatic Video Summarization by Graph Modeling
|
[
{
"docid": "e5261ee5ea2df8bae7cc82cb4841dea0",
"text": "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.",
"title": ""
},
{
"docid": "aea474fcacb8af1d820413b5f842056f",
"text": ".4 video sequence can be reprmented as a trajectory curve in a high dmensiond feature space. This video curve can be an~yzed by took Mar to those devdoped for planar cnrv=. h partidar, the classic biiary curve sphtting algorithm has been fonnd to be a nseti tool for video analysis. With a spEtting condition that checks the dimension&@ of the curve szgrnent being spht, the video curve can be recursivdy sirnpMed and repr~ented as a tree stmcture, and the framm that are fomtd to be junctions betieen curve segments at Merent, lev& of the tree can be used as ke-fiarn~s to summarize the tideo sequences at Merent levds of det ti. The-e keyframes can be combmed in various spatial and tempord configurations for browsing purposes. We describe a simple video player that displays the ke.fiarn~ seqnentifly and lets the user change the summarization level on the fly tith an additiond shder. 1.1 Sgrrlficance of the Problem Recent advances in digitd technology have promoted video as a vdnable information resource. I$le can now XCaS Se lected &ps from archives of thousands of hours of video footage host instantly. This new resource is e~citing, yet the sheer volume of data makes any retried task o~emhehning and its dcient. nsage impowible. Brow= ing tools that wodd flow the user to qnitiy get an idea of the content of video footage are SW important ti~~ ing components in these video database syst-Fortunately, the devdopment of browsing took is a very active area of research [13, 16, 17], and pow~ solutions are in the horizon. Browsers use as balding blocks subsets of fiarnes c~ed ke.frames, sdected because they smnmarize the video content better than their neighbors. Obviously, sdecting one keytiarne per shot does not adeqnatdy surnPermisslonlo rna~edigitalorhardcopi= of aftorpartof this v:ork for personalor classroomuse is granted v;IIhouIfee providedlhat copies are nol made or distributed for profitor commercial advantage, andthat copiesbear!hrsnoticeandihe full citationon ihe first page.To copyoxhem,se,IOrepublishtopostonservers or lo redistribute10 lists, requiresprior specific pzrrnisston znt’or a fe~ AChl hlultimedia’9S. BnsIol.UK @ 199sAchi 1-5s11>036s!9s/000s S.oo 211 marize the complex information content of long shots in which camera pan and zoom as we~ as object motion pr~ gr=sivdy unvd entirely new situations. Shots shotid be sampled by a higher or lower density of keyfrarnes according to their activity level. Sampbg techniques that would attempt to detect sigficant information changes simply by looking at pairs of frames or even several consecutive frames are bound to lack robustness in presence of noise, such as jitter occurring during camera motion or sudden ~urnination changes due to fluorescent Eght ticker, glare and photographic flash. kterestin~y, methods devdoped to detect perceptually signi$mnt points and &continuities on noisy 2D curves have succes~y addressed this type of problem, and can be extended to the mdtidimensiond curves that represent video sequences. h this paper, we describe an algorithm that can de compose a curve origin~y defined in a high dmensiond space into curve segments of low dimension. In partictiar, a video sequence can be mapped to a high dimensional polygonal trajectory curve by mapping each frame to a time dependent feature usctor, and representing these feature vectors as points. We can apply this algorithm to segment the curve of the video sequence into low ditnensiond curve segments or even fine segments. 
Th=e segments correspond to video footage where activity is low and frames are redundant. The idea is to detect the constituent segments of the video curoe rather than attempt to lomte the jtmctions between these segments directly. In such a dud aPProach, the curve is decomposed into segments \\vhich exkibit hearity or low dirnensiontity. Curvature discontinuiti~ are then assigned to the junctions between these segments. Detecting generrd stmcture in the video curves to derive frame locations of features such as cuts and shot transitions, rather than attempting to locate the features thernsdv~ by Iocrd analysis of frame changes, ensures that the detected positions of these features are more stable in the presence of noise which is effectively faltered out. h addition, the proposed technique butids a binary tree representation of a video sequence where branches cent tin frarn= corresponding to more dettied representations of the sequence. The user can view the video sequence at coarse or fine lev& of detds, zooming in by displaying keyfrantes corresponding to the leaves of the tree, or zooming out by displaying keyframes near the root of the tree. ●",
"title": ""
}
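To make the recursive curve-splitting idea in the passage above concrete, here is a small Python sketch. It is an illustration, not the authors' implementation: the per-frame feature (a coarse colour histogram), the flatness test (maximum perpendicular distance of interior points to the chord joining the segment endpoints) and the threshold `max_dev` are assumptions chosen for clarity, and the sketch returns a flat, ordered list of junction frames rather than the binary tree described in the passage.

```python
import numpy as np

def frame_features(frames):
    """Map each frame (H x W x 3 uint8 array) to a coarse colour-histogram vector,
    so the whole video becomes a polygonal trajectory curve in feature space."""
    feats = []
    for f in frames:
        hist, _ = np.histogramdd(f.reshape(-1, 3), bins=(4, 4, 4), range=((0, 256),) * 3)
        feats.append(hist.ravel() / (f.shape[0] * f.shape[1]))
    return np.asarray(feats)

def split_curve(feats, lo, hi, max_dev, keyframes):
    """Recursively split the curve segment feats[lo..hi] at the interior point farthest
    from the chord joining its endpoints; junction frames are recorded as keyframes."""
    if hi - lo < 2:
        return
    chord = feats[hi] - feats[lo]
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        return
    rel = feats[lo + 1:hi] - feats[lo]
    proj = np.outer(rel @ chord / norm**2, chord)   # projections onto the chord
    dists = np.linalg.norm(rel - proj, axis=1)      # perpendicular distances
    k = int(np.argmax(dists))
    if dists[k] > max_dev:                          # segment not "flat" enough: split it
        j = lo + 1 + k
        keyframes.append(j)
        split_curve(feats, lo, j, max_dev, keyframes)
        split_curve(feats, j, hi, max_dev, keyframes)

def extract_keyframes(frames, max_dev=0.05):
    feats = frame_features(frames)
    keyframes = [0, len(frames) - 1]
    split_curve(feats, 0, len(frames) - 1, max_dev, keyframes)
    return sorted(keyframes)
```

Recording the recursion as a tree instead of a flat list, and replacing the chord-distance test with a check on the dimensionality of each segment, would bring the sketch closer to the multi-level summarization scheme the passage describes.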
] |
[
{
"docid": "298d3280deb3bb326314a7324d135911",
"text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.",
"title": ""
},
{
"docid": "8dc9f29e305d66590948896de2e0a672",
"text": "Affective events are events that impact people in positive or negative ways. When people discuss an event, people understand not only the affective polarity but also the reason for the event being positive or negative. In this paper, we aim to categorize affective events based on the reasons why events are affective. We propose that an event is affective to people often because the event describes or indicates the satisfaction or violation of certain kind of human needs. For example, the event “I broke my leg” affects people negatively because the need to be physically healthy is violated. “I play computer games” has a positive affect on people because the need to have fun is probably satisfied. To categorize affective events in narrative human language, we define seven common human need categories and introduce a new data set of randomly sampled affective events with manual human need annotations. In addition, we explored two types of methods: a LIWC lexicon based method and supervised classifiers to automatically categorize affective event expressions with respect to human needs. Experiments show that these methods achieved moderate performance on this task.",
"title": ""
},
{
"docid": "77d0786af4c5eee510a64790af497e25",
"text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.",
"title": ""
},
{
"docid": "3cceb3792d55bd14adb579bb9e3932ec",
"text": "BACKGROUND\nTrastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nMETHODS\nToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404.\n\n\nFINDINGS\n594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups.\n\n\nINTERPRETATION\nTrastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer.\n\n\nFUNDING\nF Hoffmann-La Roche.",
"title": ""
},
{
"docid": "59932c6e6b406a41d814e651d32da9b2",
"text": "The purpose of this study was to examine the effects of virtual reality simulation (VRS) on learning outcomes and retention of disaster training. The study used a longitudinal experimental design using two groups and repeated measures. A convenience sample of associate degree nursing students enrolled in a disaster course was randomized into two groups; both groups completed web-based modules; the treatment group also completed a virtually simulated disaster experience. Learning was measured using a 20-question multiple-choice knowledge assessment pre/post and at 2 months following training. Results were analyzed using the generalized linear model. Independent and paired t tests were used to examine the between- and within-participant differences. The main effect of the virtual simulation was strongly significant (p < .0001). The VRS effect demonstrated stability over time. In this preliminary examination, VRS is an instructional method that reinforces learning and improves learning retention.",
"title": ""
},
{
"docid": "a6872c1cab2577547c9a7643a6acd03e",
"text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.",
"title": ""
},
{
"docid": "7dead097d1055a713bb56f9369eb1f98",
"text": "Web applications vulnerabilities allow attackers to perform malicious actions that range from gaining unauthorized account access to obtaining sensitive data. The number of web application vulnerabilities in last decade is growing constantly. Improper input validation and sanitization are reasons for most of them. The most important of these vulnerabilities based on improper input validation and sanitization is SQL injection (SQLI) vulnerability. The primary focus of our research was to develop a reliable black-box vulnerability scanner for detecting SQLI vulnerability - SQLIVDT (SQL Injection Vulnerability Detection Tool). The black-box approach is based on simulation of SQLI attacks against web applications. Thus, the scope of analysis is limited to HTTP responses and HTML pages received from the application server. In order to achieve efficient SQLI vulnerability detection, an efficient algorithm for HTML page similarity detection is used. The proposed tool showed promising results as compared to six well-known web application scanners.",
"title": ""
},
{
"docid": "edd9795ce024f8fed8057992cf3f4279",
"text": "INTRODUCTION\nIdiopathic talipes equinovarus is the most common congenital defect characterized by the presence of a congenital dysplasia of all musculoskeletal tissues distal to the knee. For many years, the treatment has been based on extensive surgery after manipulation and cast trial. Owing to poor surgical results, Ponseti developed a new treatment protocol consisting of manipulation with cast and an Achilles tenotomy. The new technique requires 4 years of orthotic management to guarantee good results. The most recent studies have emphasized how difficult it is to comply with the orthotic posttreatment protocol. Poor compliance has been attributed to parent's low educational and low income level. The purpose of the study is to evaluate if poor compliance is due to the complexity of the orthotic use or if it is related to family education, cultural, or income factors.\n\n\nMETHOD\nFifty-three patients with 73 idiopathic talipes equinovarus feet were treated with the Ponseti technique and followed for 48 months after completing the cast treatment. There was a male predominance (72%). The mean age at presentation was 1 month (range: 1 wk to 7 mo). Twenty patients (38%) had bilateral involvement, 17 patients (32%) had right side affected, and 16 patients (30%) had the left side involved. The mean time of manipulation and casting treatment was 6 weeks (range: 4 to 10 wk). Thirty-eight patients (72%) required Achilles tenotomy as stipulated by the protocol. Recurrence was considered if there was a deterioration of the Dimeglio severity score requiring remanipulation and casting.\n\n\nRESULTS\nTwenty-four out of 73 feet treated by our service showed the evidence of recurrence (33%). Sex, age at presentation, cast treatment duration, unilateral or bilateral, severity score, the necessity of Achilles tenotomy, family educational, or income level did not reveal any significant correlation with the recurrence risk. Noncompliance with the orthotic use showed a significant correlation with the recurrence rate. The noncompliance rate did not show any correlation with the patient demographic data or parent's education level, insurance, or cultural factors as proposed previously.\n\n\nCONCLUSION\nThe use of the brace is extremely relevant with the Ponseti technique outcome (recurrence) in the treatment of idiopathic talipes equinovarus. Noncompliance is not related to family education, cultural, or income level. The Ponseti postcasting orthotic protocol needs to be reevaluated to a less demanding option to improve outcome and brace compliance.",
"title": ""
},
{
"docid": "7db00719532ab0d9b408d692171d908f",
"text": "The real-time monitoring of human movement can provide valuable information regarding an individual's degree of functional ability and general level of activity. This paper presents the implementation of a real-time classification system for the types of human movement associated with the data acquired from a single, waist-mounted triaxial accelerometer unit. The major advance proposed by the system is to perform the vast majority of signal processing onboard the wearable unit using embedded intelligence. In this way, the system distinguishes between periods of activity and rest, recognizes the postural orientation of the wearer, detects events such as walking and falls, and provides an estimation of metabolic energy expenditure. A laboratory-based trial involving six subjects was undertaken, with results indicating an overall accuracy of 90.8% across a series of 12 tasks (283 tests) involving a variety of movements related to normal daily activities. Distinction between activity and rest was performed without error; recognition of postural orientation was carried out with 94.1% accuracy, classification of walking was achieved with less certainty (83.3% accuracy), and detection of possible falls was made with 95.6% accuracy. Results demonstrate the feasibility of implementing an accelerometry-based, real-time movement classifier using embedded intelligence",
"title": ""
},
{
"docid": "a2842352924cbd1deff52976425a0bd6",
"text": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"title": ""
},
{
"docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d",
"text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.",
"title": ""
},
{
"docid": "8ed2fa021e5b812de90795251b5c2b64",
"text": "A new implicit surface fitting method for surface reconstruction from scattered point data is proposed. The method combines an adaptive partition of unity approximation with least-squares RBF fitting and is capable of generating a high quality surface reconstruction. Given a set of points scattered over a smooth surface, first a sparse set of overlapped local approximations is constructed. The partition of unity generated from these local approximants already gives a faithful surface reconstruction. The final reconstruction is obtained by adding compactly supported RBFs. The main feature of the developed approach consists of using various regularization schemes which lead to economical, yet accurate surface reconstruction.",
"title": ""
},
{
"docid": "99fdab0b77428f98e9486d1cc7430757",
"text": "Self organizing Maps (SOMs) are most well-known, unsupervised approach of neural network that is used for clustering and are very efficient in handling large and high dimensional dataset. As SOMs can be applied on large complex set, so it can be implemented to detect credit card fraud. Online banking and ecommerce has been experiencing rapid growth over past years and will show tremendous growth even in future. So, it is very necessary to keep an eye on fraudsters and find out some ways to depreciate the rate of frauds. This paper focuses on Real Time Credit Card Fraud Detection and presents a new and innovative approach to detect the fraud by the help of SOM. Keywords— Self-Organizing Map, Unsupervised Learning, Transaction Introduction The fast and rapid growth in the credit card issuers, online merchants and card users have made them very conscious about the online frauds. Card users just want to make safe transactions while purchasing their goods and on the other hand, banks want to differentiate the legitimate as well as fraudulent users. The merchants that is mostly affected as they do not have any kind of evidence like Digital Signature wants to sell their goods only to the legitimate users to make profit and want to use a great secure system that avoid them from a great loss. Our approach of Self Organizing map can work in the large complex datasets and can cluster even unaware datasets. It is an unsupervised neural network that works even in the absence of an external teacher and provides fruitful results in detecting credit card frauds. It is interesting to note that credit card fraud affect owner the least and merchant the most. The existing legislation and card holder protection policies as well as insurance scheme affect most the merchant and customer the least. Card issuer bank also has to pay the administrative cost and infrastructure cost. Studies show that average time lag between the fraudulent transaction dates and charge back notification 1344 Mitali Bansal and Suman can be high as 72 days, thereby giving fraudster sufficient time to cause severe damage. In this paper first, you will see a brief survey of different approaches on credit card fraud detection systems,. In Section 2 we explain the design and architecture of SOM to detect Credit Card Fraud. Section 3, will represent results. Finally, Conclusion are presented in Section 4. A Survey of Credit card fraud Detection Fraud Detection Systems work by trying to identify anomalies in an environment [1]. At the early stage, the research focus lies in using rule based expert systems. The model’s rule constructed through the input of many fraud experts within the bank [2]. But when their processing is encountered, their output become was worst. Because the rule based expert system totally lies on the prior information of the data set that is generally not available easily in the case of credit card frauds. After these many Artificial Neural Network (ANN) is mostly used and solved very complex problems in a very efficient way [3]. Some believe that unsupervised methods are best to detect credit card frauds because these methods work well even in absence of external teacher. While supervised methods are based on prior data knowledge and surely needs an external teacher. Unsupervised method is used [4] [5] to detect some kind of anomalies like fraud. 
They do not cluster the data but provides a ranking on the list of all segments and by this ranking method they provide how much a segment is anomalous as compare to the whole data sets or other segments [6]. Dempster-Shafer Theory [1] is able to detect anomalous data. They did an experiment to detect infected E-mails by the help of D-S theory. As this theory can also be helpful because in this modern era all the new card information is sent through e-mails by the banks. Some various other approaches have also been used to detect Credit Card Frauds, one of which is ID3 pre pruning method in which decision tree is formed to detect anomalous data [7]. Artificial Neural Networks are other efficient and intelligent methods to detect credit card fraud. A compound method that is based on rule-based systems and ANN is used to detect Credit card fraud by Brause et al. [8]. Our work is based on self-organizing map that is based on unsupervised approach to detect Credit Card Fraud. We focus on to detect anomalous data by making clusters so that legitimate and fraudulent transactions can be differentiated. Collection of data and its pre-processing is also explained by giving example in fraud detection. SYSTEM DESIGN ARCHITECTURE The SOM works well in detecting Credit Card Fraud and all its interesting properties we have already discussed. Here we provide some detailed prototype and working of SOM in fraud detection. Credit Card Fraud Detection Using Self Organised Map 1345 Our Approach to detect Credit Card Fraud Using SOM Our approach towards Real time Credit Card Fraud detection is modelled by prototype. It is a multilayered approach as: 1. Initial Selection of data set. 2. Conversion of data from Symbolic to Numerical Data Set. 3. Implementation of SOM. 4. A layer of further review and decision making. This multilayered approach works well in the detection of Credit Card Fraud. As this approach is based on SOM, so finally it will cluster the data into fraudulent and genuine sets. By further review the sets can be analyzed and proper decision can be taken based on those results. The algorithm that is implemented to detect credit card fraud using Self Organizing Map is represented in Figure 1: 1. Initially choose all neurons (weight vectors wi) randomly. 2. For each input vector Ii { 2. 1) Convert all the symbolic input to the Numerical input by applying some mean and standard deviation formulas. 2. 2) Perform the initial authentication process like verification of Pin, Address, expiry date etc. } 3. Choose the learning rate parameter randomly for eg. 0. 5 4. Initially update all neurons for each input vector Ii. 5. Apply the unsupervised approach to distinguish the transaction into fraudulent and non-fraudulent cluster. 5. 1) Perform iteration till a specific cluster is not formed for a input vector. 6. By applying SOM we can divide the transactions into fraudulent (Fk) and genuine vector (Gk). 7. Perform a manually review decision. 8. Get your optimized result. Figure 1: Algorithm to detect Credit Card Fraud Initial Selection of Data Set Input vectors are generally in the form of High Dimensional Real world quantities which will be fed to a neuron matrix. These quantities are generally divided as [9]: 1346 Mitali Bansal and Suman Figure 2: Division of Transactions to form an Input Matrix In Account related quantities we can include like account number, currency of account, account opening date, last date of credit or debit available balance etc. 
In customer related quantities we can include customer id, customer type like high profile, low profile etc. In transaction related quantities we can have transaction no, location, currency, its timestamp etc. Conversion of Symbolic data into Numeric In credit card fraud detection, all of the data of banking transactions will be in the form of the symbolic, so there is a need to convert that symbolic data into numeric one. For example location, name, customer id etc. Conversion of all this data needs some normal distribution mechanism on the basis of frequency. The normalizing of data is done using Z = (Ni-Mi) / S where Ni is frequency of occurrence of a particular entity, M is mean and S is standard deviation. Then after all this procedure we will arrive at normalized values [9]. Implementation of SOM After getting all the normalized values, we make a input vector matrix. After that randomly weight vector is selected, this is generally termed as Neuron matrix. Dimension of this neuron matrix will be same as input vector matrix. A randomly learning parameter α is also taken. The value of this learning parameter is a small positive value that can be adjusted according to the process. The commonly used similarity matrix is the Euclidian distance given by equation 1: Distance between two neuron = jx(p)=minj││X-Wj(p)││={ Xi-Wij(p)]}, (1) Where j=1, 2......m and W is neuron or weight matrix, X is Input vectorThe main output of SOM is the patterns and cluster it has given as output vector. The cluster in credit card fraud detection will be in the form of fraudulent and genuine set represented as Fk and Gk respectively. Credit Card Fraud Detection Using Self Organised Map 1347 Review and decision making The clustering of input data into fraudulent and genuine set shows the categories of transactions performed as well as rarely performed more frequently as well as rarely by each customer. Since by the help of SOM relationship as well as hidden patterns is unearthed, we get more accuracy in our results. If the extent of suspicious activity exceeds a certain threshold value that transaction can be sent for review. So, it reduces overall processing time and complexity. Results The no of transactions taken in Test1, Test2, Test3 and Test4 are 500, 1000, 1500 and 2000 respectively. When compared to ID3 algorithm our approach presents much efficient result as shown in figure 3. Conclusion As results shows that SOM gives better results in case of detecting credit card fraud. As all parameters are verified and well represented in plots. The uniqueness of our approach lies in using the normalization and clustering mechanism of SOM of detecting credit card fraud. This helps in detecting hidden patterns of the transactions which cannot be identified to the other traditional method. With appropriate no of weight neurons and with help of thousands of iterations the network is trained and then result is verified to new transactions. The concept of normalization will help to normalize the values in other fraud cases and SOM will be helpful in detecting anomalies in credit card fraud cas",
"title": ""
},
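As a rough illustration of the clustering step outlined in the passage above, the following NumPy sketch implements the frequency-based z-scoring and a stripped-down SOM: random weight initialisation, winner selection by the Euclidean distance of equation (1), and a decaying learning-rate update. It is a simplified sketch under assumed details, not the authors' system: neighbourhood updates are omitted, and the class name, parameter values and the commented usage are hypothetical.

```python
import numpy as np

def zscore(frequencies):
    """Symbolic-to-numeric step: replace each symbolic value by the z-score of its
    frequency of occurrence, Z = (Ni - Mi) / S."""
    freq = np.asarray(frequencies, dtype=float)
    return (freq - freq.mean()) / freq.std()

class SimpleSOM:
    def __init__(self, n_neurons, n_features, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_neurons, n_features))  # step 1: random weight vectors
        self.lr = lr                                  # step 3: learning rate, e.g. 0.5

    def bmu(self, x):
        # Equation (1): the winning neuron minimises the Euclidean distance ||x - w_j||.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, iterations=1000):
        rng = np.random.default_rng(1)
        for t in range(iterations):
            x = data[rng.integers(len(data))]
            j = self.bmu(x)
            decay = 1.0 - t / iterations
            self.w[j] += self.lr * decay * (x - self.w[j])  # move the winner toward the input

    def assign(self, data):
        # Steps 5-6: each transaction falls into the cluster of its winning neuron.
        return np.array([self.bmu(x) for x in data])

# Hypothetical usage: rows of `transactions` are normalised transaction feature vectors.
# som = SimpleSOM(n_neurons=4, n_features=transactions.shape[1])
# som.train(transactions)
# labels = som.assign(transactions)  # clusters to be reviewed as fraudulent vs. genuine
```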
{
"docid": "f7f609ebb1a0fcf789e5e2e5fe463718",
"text": "Individuals with generalized anxiety disorder (GAD) display poor emotional conflict adaptation, a cognitive control process requiring the adjustment of performance based on previous-trial conflict. It is unclear whether GAD-related conflict adaptation difficulties are present during tasks without emotionally-salient stimuli. We examined conflict adaptation using the N2 component of the event-related potential (ERP) and behavioral responses on a Flanker task from 35 individuals with GAD and 35 controls. Groups did not differ on conflict adaptation accuracy; individuals with GAD also displayed intact RT conflict adaptation. In contrast, individuals with GAD showed decreased amplitude N2 principal component for conflict adaptation. Correlations showed increased anxiety and depressive symptoms were associated with longer RT conflict adaptation effects and lower ERP amplitudes, but not when separated by group. We conclude that individuals with GAD show reduced conflict-related component processes that may be influenced by compensatory activity, even in the absence of emotionally-salient stimuli.",
"title": ""
},
{
"docid": "e6bb946ea2984ccb54fd37833bb55585",
"text": "11 Automatic Vehicles Counting and Recognizing (AVCR) is a very challenging topic in transport engineering having important implications for the modern transport policies. Implementing a computer-assisted AVCR in the most vital districts of a country provides a large amount of measurements which are statistically processed and analyzed, the purpose of which is to optimize the decision-making of traffic operation, pavement design, and transportation planning. Since the advent of computer vision technology, video-based surveillance of road vehicles has become a key component in developing autonomous intelligent transportation systems. In this context, this paper proposes a Pattern Recognition system which employs an unsupervised clustering algorithm with the objective of detecting, counting and recognizing a number of dynamic objects crossing a roadway. This strategy defines a virtual sensor, whose aim is similar to that of an inductive-loop in a traditional mechanism, i.e. to extract from the traffic video streaming a number of signals containing anarchic information about the road traffic. Then, the set of signals is filtered with the aim of conserving only motion’s significant patterns. Resulted data are subsequently processed by a statistical analysis technique so as to estimate and try to recognize a number of clusters corresponding to vehicles. Finite Mixture Models fitted by the EM algorithm are used to assess such clusters, which provides ∗Corresponding author Email addresses: [email protected] (Hana RABBOUCH), [email protected] (Foued SAÂDAOUI), [email protected] (Rafaa MRAIHI) Preprint submitted to Journal of LTEX Templates April 21, 2017",
"title": ""
},
{
"docid": "4d84b8dbcd0d5922fa3b20287b75c449",
"text": "We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplication and reduction depending on the sparsity of the tensor. We investigate a fine and a coarse-grain task definition for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve the load balance as well as reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP-/PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in CP decomposition context, and report scalability results up to 1024 MPI ranks. We observed up to 194 fold speedups using 512 MPI processes on a well-known real world data, and significantly better performance results with respect to a state of the art implementation.",
"title": ""
},
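For a sparse three-way tensor stored in coordinate (COO) form, the MTTKRP operation mentioned in the passage above amounts to gathering rows of the other two factor matrices at each nonzero, multiplying them element-wise, scaling by the nonzero value, and accumulating into the output row. The sketch below shows only this sequential kernel; it says nothing about the hypergraph partitioning or the distributed-memory machinery of the paper, and the function and argument names are invented for illustration.

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, dim0, rank):
    """MTTKRP for mode 0 of a sparse 3-way tensor in COO form:
    M[i, :] = sum over nonzeros of X[i, j, k] * (B[j, :] * C[k, :])."""
    i, j, k = coords                        # 1-D integer arrays of nonzero indices
    M = np.zeros((dim0, rank))
    updates = vals[:, None] * B[j] * C[k]   # element-wise products, one row per nonzero
    np.add.at(M, i, updates)                # reduction: accumulate rows into M[i]
    return M
```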
{
"docid": "6c682f3412cc98eac5ae2a2356dccef7",
"text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.",
"title": ""
},
{
"docid": "14fe7deaece11b3d4cd4701199a18599",
"text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.",
"title": ""
},
{
"docid": "041772bbad50a5bf537c0097e1331bdd",
"text": "As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material. We describe an automatic question generator that uses semantic pattern recognition to create questions of varying depth and type for self-study or tutoring. Throughout, we explore how linguistic considerations inform system design. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.",
"title": ""
},
{
"docid": "d1eed1d7875930865944c98fbab5f7e1",
"text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.",
"title": ""
}
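The coarse optic disc localisation step described above can be sketched with standard OpenCV template matching. This is only an illustration of that single step under assumed details (green-channel extraction, Gaussian smoothing, a user-supplied disc template); it does not reproduce the paper's directional matched filter, the vessel-based false-positive check, or the level-set boundary segmentation.

```python
import cv2

def locate_optic_disc(fundus_bgr, template_gray):
    """Coarse optic disc localisation by normalised cross-correlation template matching
    on the smoothed green channel of a colour fundus image."""
    green = fundus_bgr[:, :, 1]
    green = cv2.GaussianBlur(green, (11, 11), 0)      # suppress vessel detail and noise
    response = cv2.matchTemplate(green, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(response)
    th, tw = template_gray.shape
    center = (max_loc[0] + tw // 2, max_loc[1] + th // 2)  # (x, y) of the best match
    return center, max_val
```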
] |
scidocsrr
|
8542d6e847a4522a40e735600bd2095a
|
An efficient data replication and load balancing technique for fog computing environment
|
[
{
"docid": "780f2a97da4f18fc3710fa0ca0489ef4",
"text": "MapReduce has gradually become the framework of choice for \"big data\". The MapReduce model allows for efficient and swift processing of large scale data with a cluster of compute nodes. However, the efficiency here comes at a price. The performance of widely used MapReduce implementations such as Hadoop suffers in heterogeneous and load-imbalanced clusters. We show the disparity in performance between homogeneous and heterogeneous clusters in this paper to be high. Subsequently, we present MARLA, a MapReduce framework capable of performing well not only in homogeneous settings, but also when the cluster exhibits heterogeneous properties. We address the problems associated with existing MapReduce implementations affecting cluster heterogeneity, and subsequently present through MARLA the components and trade-offs necessary for better MapReduce performance in heterogeneous cluster and cloud environments. We quantify the performance gains exhibited by our approach against Apache Hadoop and MARIANE in data intensive and compute intensive applications.",
"title": ""
}
] |
[
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "2c0b3b58da77cc217e4311142c0aa196",
"text": "In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.",
"title": ""
},
{
"docid": "9c7f9ff55b02bd53e94df004dcc615b9",
"text": "Support Vector Machines (SVM) is among the most popular classification techniques in machine learning, hence designing fast primal SVM algorithms for large-scale datasets is a hot topic in recent years. This paper presents a new L2norm regularized primal SVM solver using Augmented Lagrange Multipliers, with linear computational cost for Lp-norm loss functions. The most computationally intensive steps (that determine the algorithmic complexity) of the proposed algorithm is purely and simply matrix-byvector multiplication, which can be easily parallelized on a multi-core server for parallel computing. We implement and integrate our algorithm into the interfaces and framework of the well-known LibLinear software toolbox. Experiments show that our algorithm is with stable performance and on average faster than the stateof-the-art solvers such as SVM perf , Pegasos and the LibLinear that integrates the TRON, PCD and DCD algorithms.",
"title": ""
},
{
"docid": "5d7dced0ed875fed0f11440dc26fffd1",
"text": "Different from conventional mobile networks designed to optimize the transmission efficiency of one particular service (e.g., streaming voice/ video) primarily, the industry and academia are reaching an agreement that 5G mobile networks are projected to sustain manifold wireless requirements, including higher mobility, higher data rates, and lower latency. For this purpose, 3GPP has launched the standardization activity for the first phase 5G system in Release 15 named New Radio (NR). To fully understand this crucial technology, this article offers a comprehensive overview of the state-of-the-art development of NR, including deployment scenarios, numerologies, frame structure, new waveform, multiple access, initial/random access procedure, and enhanced carrier aggregation (CA) for resource requests and data transmissions. The provided insights thus facilitate knowledge of design and practice for further features of NR.",
"title": ""
},
{
"docid": "96d8e375616a7ee137276d385c14a18a",
"text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.",
"title": ""
},
{
"docid": "70f0997789d4d61a6e5d44f15a6af32a",
"text": "This study reviewed the literature on cone-beam computerized tomography (CBCT) imaging of the oral and maxillofacial (OMF) region. A PUBMED search (National Library of Medicine, NCBI; revised 1 December 2007) from 1998 to December 2007 was conducted. This search revealed 375 papers, which were screened in detail. 176 papers were clinically relevant and were analyzed in detail. CBCT is used in OMF surgery and orthodontics for numerous clinical applications, particularly for its low cost, easy accessibility and low radiation compared with multi-slice computerized tomography. The results of this systematic review show that there is a lack of evidence-based data on the radiation dose for CBCT imaging. Terminology and technical device properties and settings were not consistent in the literature. An attempt was made to provide a minimal set of CBCT device-related parameters for dedicated OMF scanners as a guideline for future studies.",
"title": ""
},
{
"docid": "4d91850baa5995bc7d5e3d5e9e11fa58",
"text": "Drug risk management has many tools for minimizing risk and black-boxed warnings (BBWs) are one of those tools. Some serious adverse drug reactions (ADRs) emerge only after a drug is marketed and used in a larger population. In Thailand, additional legal warnings after drug approval, in the form of black-boxed warnings, may be applied. Review of their characteristics can assist in the development of effective risk mitigation. This study was a cross sectional review of all legal warnings imposed in Thailand after drug approval (2003-2012). Any boxed warnings for biological products and revised warnings which were not related to safety were excluded. Nine legal warnings were evaluated. Seven related to drugs classes and two to individual drugs. The warnings involved four main types of predictable ADRs: drug-disease interactions, side effects, overdose and drug-drug interactions. The average time from first ADRs reported to legal warnings implementation was 12 years. The triggers were from both safety signals in Thailand and regulatory measures in other countries outside Thailand.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "5e0921d158f0fa7b299fffba52f724d5",
"text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.",
"title": ""
},
{
"docid": "5910bcdd2dcacb42d47194a70679edb1",
"text": "Developing effective suspicious activity detection methods has become an increasingly critical problem for governments and financial institutions in their efforts to fight money laundering. Previous anti-money laundering (AML) systems were mostly rule-based systems which suffered from low efficiency and could can be easily learned and evaded by money launders. Recently researchers have begun to use machine learning methods to solve the suspicious activity detection problem. However nearly all these methods focus on detecting suspicious activities on accounts or individual level. In this paper we propose a sequence matching based algorithm to identify suspicious sequences in transactions. Our method aims to pick out suspicious transaction sequences using two kinds of information as reference sequences: 1) individual account’s transaction history and 2) transaction information from other accounts in a peer group. By introducing the reference sequences, we can combat those who want to evade regulations by simply learning and adapting reporting criteria, and easily detect suspicious patterns. The initial results show that our approach is highly accurate.",
"title": ""
},
{
"docid": "a0eb1b462d2169f5e7fa67690169591f",
"text": "In this paper, we present 3 different neural network-based methods to perform variable selection. OCD Optimal Cell Damage is a pruning method, which evaluates the usefulness of a variable and prunes the least useful ones (it is related to the Optimal Brain Damage method of J_.e Cun et al.). Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log prior to the weights distribution. We propose to use two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach allows to select efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to significantly improve on that.",
"title": ""
},
{
"docid": "6d3dbbf788255dfc137b1324e491fd9d",
"text": "Nowadays, a great number of healthcare data are generated every day from both medical institutions and individuals. Healthcare information exchange (HIE) has been proved to benefit the medical industry remarkably. To store and share such large amount of healthcare data is important while challenging. In this paper, we propose BlocHIE, a Blockchain-based platform for healthcare information exchange. First, we analyze the different requirements for sharing healthcare data from different sources. Based on the analysis, we employ two loosely-coupled Blockchains to handle different kinds of healthcare data. Second, we combine off-chain storage and on-chain verification to satisfy the requirements of both privacy and authenticability. Third, we propose two fairness-based packing algorithms to improve the system throughput and the fairness among users jointly. To demonstrate the practicability and effectiveness of BlocHIE, we implement BlocHIE in a minimal-viable-product way and evaluate the proposed packing algorithms extensively.",
"title": ""
},
{
"docid": "3714dabbe309545a1926e06e82f91975",
"text": "The automatic generation of anime characters offers an opportunity to bring a custom character into existence without professional skill. Besides, professionals may also take advantages of the automatic generation for inspiration on animation and game character design. however results from existing models [15, 18, 8, 22, 12] on anime image generation are blurred and distorted on an non-trivial frequency, thus generating industry-standard facial images for anime characters remains a challenge. In this paper, we propose a model that produces anime faces at high quality with promising rate of success with three-fold contributions: A clean dataset from Getchu, a suitable DRAGAN[10]-based SRResNet[11]like GAN model, and our general approach to training conditional model from image with estimated tags as conditions. We also make available a public accessible web interface.",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
{
"docid": "658c7ae98ea4b0069a7a04af1e462307",
"text": "Exploiting packetspsila timing information for covert communication in the Internet has been explored by several network timing channels and watermarking schemes. Several of them embed covert information in the inter-packet delay. These channels, however, can be detected based on the perturbed traffic pattern, and their decoding accuracy could be degraded by jitter, packet loss and packet reordering events. In this paper, we propose a novel TCP-based timing channel, named TCPScript to address these shortcomings. TCPScript embeds messages in ldquonormalrdquo TCP data bursts and exploits TCPpsilas feedback and reliability service to increase the decoding accuracy. Our theoretical capacity analysis and extensive experiments have shown that TCPScript offers much higher channel capacity and decoding accuracy than an IP timing channel and JitterBug. On the countermeasure, we have proposed three new metrics to detect aggressive TCPScript channels.",
"title": ""
},
{
"docid": "0b7ed990d65be35f445d4243d627f9cd",
"text": "A middle-1x nm design rule multi-level NAND flash memory cell (M1X-NAND) has been successfully developed for the first time. 1) QSPT (Quad Spacer Patterning Technology) of ArF immersion lithography is used for patterning mid-1x nm rule wordline (WL). In order to achieve high performance and reliability, several integration technologies are adopted, such as 2) advanced WL air-gap process, 3) floating gate slimming process, and 4) optimized junction formation scheme. And also, by using 5) new N±1 WL Vpass scheme during programming, charge loss and program speed are greatly improved. As a result, mid-1x nm design rule NAND flash memories has been successfully realized.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "db8b26229ced95bab2028d0b8eb8a43f",
"text": "OBJECTIVES\nThis study investigated isometric and isokinetic hip strength in individuals with and without symptomatic femoroacetabular impingement (FAI). The specific aims were to: (i) determine whether differences exist in isometric and isokinetic hip strength measures between groups; (ii) compare hip strength agonist/antagonist ratios between groups; and (iii) examine relationships between hip strength and self-reported measures of either hip pain or function in those with FAI.\n\n\nDESIGN\nCross-sectional.\n\n\nMETHODS\nFifteen individuals (11 males; 25±5 years) with symptomatic FAI (clinical examination and imaging (alpha angle >55° (cam FAI), and lateral centre edge angle >39° and/or positive crossover sign (combined FAI))) and 14 age- and sex-matched disease-free controls (no morphological FAI on magnetic resonance imaging) underwent strength testing. Maximal voluntary isometric contraction strength of hip muscle groups and isokinetic hip internal (IR) and external rotation (ER) strength (20°/s) were measured. Groups were compared with independent t-tests and Mann-Whitney U tests.\n\n\nRESULTS\nParticipants with FAI had 20% lower isometric abduction strength than controls (p=0.04). There were no significant differences in isometric strength for other muscle groups or peak isokinetic ER or IR strength. The ratio of isometric, but not isokinetic, ER/IR strength was significantly higher in the FAI group (p=0.01). There were no differences in ratios for other muscle groups. Angle of peak IR torque was the only feature correlated with symptoms.\n\n\nCONCLUSIONS\nIndividuals with symptomatic FAI demonstrate isometric hip abductor muscle weakness and strength imbalance in the hip rotators. Strength measurement, including agonist/antagonist ratios, may be relevant for clinical management of FAI.",
"title": ""
},
{
"docid": "d284fff9eed5e5a332bb3cfc612a081a",
"text": "This paper describes the NILC USP system that participated in SemEval-2013 Task 2: Sentiment Analysis in Twitter. Our system adopts a hybrid classification process that uses three classification approaches: rulebased, lexicon-based and machine learning approaches. We suggest a pipeline architecture that extracts the best characteristics from each classifier. Our system achieved an Fscore of 56.31% in the Twitter message-level subtask.",
"title": ""
}
] |
scidocsrr
|
5cb986d7963964b187471324fb6e2958
|
Semantic Technology
|
[
{
"docid": "1c5b71d028643c2bfc763146de242d34",
"text": "Solving Winograd Schema Problems Quan Liu†, Hui Jiang‡, Zhen-Hua Ling†, Xiaodan Zhu, Si Wei§, Yu Hu†§ † National Engineering Laboratory for Speech and Language Information Processing University of Science and Technology of China, Hefei, Anhui, China ‡ Department of Electrical Engineering and Computer Science, York University, Canada ` National Research Council Canada, Ottawa, Canada § iFLYTEK Research, Hefei, China emails: [email protected], [email protected], [email protected], [email protected] [email protected], [email protected] Abstract",
"title": ""
},
{
"docid": "172f105b7b09f19b278742af95a8d9bb",
"text": "50 AI MAGAZINE The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern, 2012) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Turing (1950) had first introduced the notion of testing a computer system’s intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather a computer. Although intuitively appealing and arbitrarily flexible — in theory, a human can ask the computer system that is being tested wide-ranging questions about any subject desired — in practice, the execution of the Turing test turns out to be highly susceptible to systems that few people would wish to call intelligent. The Loebner Prize Competition (Christian 2011) is in particular associated with the development of chatterbots that are best viewed as successors to ELIZA (Weizenbaum 1966), the program that fooled people into thinking that they were talking to a human psychotherapist by cleverly turning a person’s statements into questions of the sort a therapist would ask. The knowledge and inference that characterize conversations of substance — for example, discussing alternate metaphors in sonnets of Shakespeare — and which Turing presented as examples of the sorts of conversation that an intelligent system should be able to produce, are absent in these chatterbots. The focus is merely on engaging in surfacelevel conversation that can fool some humans who do not delve too deeply into a conversation, for at least a few minutes, into thinking that they are speaking to another person. The widely reported triumph of the chatterbot Eugene Goostman in fooling 10 out of 30 judges to judge, after a fiveminute conversation, that it was human (University of Read-",
"title": ""
},
{
"docid": "c7ab6bc685029cc61a02f4596fef8818",
"text": "UPON Lite focuses on users, typically domain experts without ontology expertise, minimizing the role of ontology engineers.",
"title": ""
}
] |
[
{
"docid": "f601a18bec932f0cb0a56dd3f3bd5168",
"text": "The purpose of this paper is to investigate m-banking adoption and usage in Mauritius; a service relatively new in the island. The aim is to gauge awareness level and to identify those factors that inhibit or motivate m-banking usage in Mauritius. An online survey was carried out using a standard questionnaire. Convenience sampling method was used. Out of the 211 people who responded to the survey, only 169 responses were deemed to be usable. It was found that awareness of local m-banking services is quite high and usage level is reasonable. Convenience, time and effort savings, privacy, ubiquitous access to banking services, compatibility with lifestyle and banking needs were identified as the main factors motivating m-banking adoption. Perceived security risk and reliability were found to be the main obstacles to m-banking usage. It was also found that m-banking usage is not associated with age, gender and salary. There is, however, an association between education and m-banking usage. As to the limitation of the study, the use of convenience sampling limits the generalisation of the findings of the study. The study has practical implications for local banks either offering or planning to launch mbanking services in Mauritius as factors preventing and encouraging usage of m-banking are discussed. The constructs of Technology Acceptance Model (TAM) and Innovation Diffusion Theory (IDT) are integrated and extended with perceived risk and cost to ascertain mbanking usage and adoption. The study, therefore, offers valuable insights on m-banking in Mauritius.",
"title": ""
},
{
"docid": "d753d442d99ac49569aa93e33a658ad6",
"text": "Emotion is at the core of understanding ourselves and others, and the automatic expression and detection of emotion could enhance our experience with technologies. In this paper, we explore the use of computational linguistic tools to derive emotional features. Using 50 and 200 word samples of naturally-occurring blog texts, we find that some emotions are more discernible than others. In particular automated content analysis shows that authors expressing anger use the most affective language and also negative affect words; authors expressing joy use the most positive emotion words. In addition we explore the use of co-occurrence semantic space techniques to classify texts via their distance from emotional concept exemplar words: This demonstrated some success, particularly for identifying author expression of fear and joy emotions. This extends previous work by using finer-grained emotional categories and alternative linguistic analysis techniques. We relate our finding to human emotion perception and note potential applications.",
"title": ""
},
{
"docid": "df9d85417753465e489b327b83c4211d",
"text": "As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed, or learned from examples. Existing learning based methods have shown superior restoration quality but are not practical enough due to their restricted model design. They solely focus on learning a prior and require to know the noise level for deconvolution. We address the gap between the optimizationbased and learning-based approaches by learning an optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A parameterfree update unit is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed method is effective and robust to produce favorable results as well as practical for realworld image deblurring applications.",
"title": ""
},
{
"docid": "894164566e284f0e4318d94cc6768871",
"text": "This paper investigates the problems of signal reconstruction and blind deconvolution for graph signals that have been generated by an originally sparse input diffused through the network via the application of a graph filter operator. Assuming that the support of the sparse input signal is unknown, and that the diffused signal is observed only at a subset of nodes, we address the related problems of: 1) identifying the input and 2) interpolating the values of the diffused signal at the non-sampled nodes. We first consider the more tractable case where the coefficients of the diffusing graph filter are known and then address the problem of joint input and filter identification. The corresponding blind identification problems are formulated, novel convex relaxations are discussed, and modifications to incorporate a priori information on the sparse inputs are provided.",
"title": ""
},
{
"docid": "b039a40e0822408cf86b4ae3a356519a",
"text": "Sortagging is a versatile method for site-specific modification of proteins as applied to a variety of in vitro reactions. Here, we explore possibilities of adapting the sortase method for use in living cells. For intracellular sortagging, we employ the Ca²⁺-independent sortase A transpeptidase (SrtA) from Streptococcus pyogenes. Substrate proteins were equipped with the C-terminal sortase-recognition motif (LPXTG); we used proteins with an N-terminal (oligo)glycine as nucleophiles. We show that sortase-dependent protein ligation can be achieved in Saccharomyces cerevisiae and in mammalian HEK293T cells, both in the cytosol and in the lumen of the endoplasmic reticulum (ER). ER luminal sortagging enables secretion of the reaction products, among which circular polypeptides. Protein ligation of substrate and nucleophile occurs within 30 min of translation. The versatility of the method is shown by protein ligation of multiple substrates with green fluorescent protein-based nucleophiles in different intracellular compartments.",
"title": ""
},
{
"docid": "5e14a79e4634445291d67c3d7f4ea617",
"text": "A a new type of word-of-mouth information, online consumer product review is an emerging market phenomenon that is playing an increasingly important role in consumers’ purchase decisions. This paper argues that online consumer review, a type of product information created by users based on personal usage experience, can serve as a new element in the marketing communications mix and work as free “sales assistants” to help consumers identify the products that best match their idiosyncratic usage conditions. This paper develops a normative model to address several important strategic issues related to consumer reviews. First, we show when and how the seller should adjust its own marketing communication strategy in response to consumer reviews. Our results reveal that if the review information is sufficiently informative, the two types of product information, i.e., the seller-created product attribute information and buyer-created review information, will interact with each other. For example, when the product cost is low and/or there are sufficient expert (more sophisticated) product users, the two types of information are complements, and the seller’s best response is to increase the amount of product attribute information conveyed via its marketing communications after the reviews become available. However, when the product cost is high and there are sufficient novice (less sophisticated) product users, the two types of information are substitutes, and the seller’s best response is to reduce the amount of product attribute information it offers, even if it is cost-free to provide such information. We also derive precise conditions under which the seller can increase its profit by adopting a proactive strategy, i.e., adjusting its marketing strategies even before consumer reviews become available. Second, we identify product/market conditions under which the seller benefits from facilitating such buyer-created information (e.g., by allowing consumers to post user-based product reviews on the seller’s website). Finally, we illustrate the importance of the timing of the introduction of consumer reviews available as a strategic variable and show that delaying the availability of consumer reviews for a given product can be beneficial if the number of expert (more sophisticated) product users is relatively large and cost of the product is low.",
"title": ""
},
{
"docid": "693d9ee4f286ef03175cb302ef1b2a93",
"text": "We explore the question of whether phase-based time-of-flight (TOF) range cameras can be used for looking around corners and through scattering diffusers. By connecting TOF measurements with theory from array signal processing, we conclude that performance depends on two primary factors: camera modulation frequency and the width of the specular lobe (“shininess”) of the wall. For purely Lambertian walls, commodity TOF sensors achieve resolution on the order of meters between targets. For seemingly diffuse walls, such as posterboard, the resolution is drastically reduced, to the order of 10cm. In particular, we find that the relationship between reflectance and resolution is nonlinear—a slight amount of shininess can lead to a dramatic improvement in resolution. Since many realistic scenes exhibit a slight amount of shininess, we believe that off-the-shelf TOF cameras can look around corners.",
"title": ""
},
{
"docid": "a7284bfc38d5925cb62f04c8f6dcaae2",
"text": "The brain's electrical signals enable people without muscle control to physically interact with the world.",
"title": ""
},
{
"docid": "5aab6cd36899f3d5e3c93cf166563a3e",
"text": "Vein images generally appear darker with low contrast, which require contrast enhancement during preprocessing to design satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvinced. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high quality lab-made database is established. Second, CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain quality index of lab-made vein images. Then, unsupervised $K$ -means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one. CE_DLBP could be utilized for discriminative feature extraction for LQ images. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with PolyU database illustrates its generalization ability and robustness.",
"title": ""
},
{
"docid": "371ab18488da4e719eda8838d0d42ba8",
"text": "Research reveals dramatic differences in the ways that people from different cultures perceive the world around them. Individuals from Western cultures tend to focus on that which is object-based, categorically related, or self-relevant whereas people from Eastern cultures tend to focus more on contextual details, similarities, and group-relevant information. These different ways of perceiving the world suggest that culture operates as a lens that directs attention and filters the processing of the environment into memory. The present review describes the behavioral and neural studies exploring the contribution of culture to long-term memory and related processes. By reviewing the extant data on the role of various neural regions in memory and considering unifying frameworks such as a memory specificity approach, we identify some promising directions for future research.",
"title": ""
},
{
"docid": "08f7b46ed2d134737c62381a7e193af3",
"text": "We have been advocating cognitive developmental robotics to obtain new insight into the development of human cognitive functions by utilizing synthetic and constructive approaches. Among the different emotional functions, empathy is difficult to model, but essential for robots to be social agents in our society. In my previous review on artificial empathy (Asada, 2014b), I proposed a conceptual model for empathy development beginning with emotional contagion to envy/schadenfreude along with self/other differentiation. In this article, the focus is on two aspects of this developmental process, emotional contagion in relation to motor mimicry, and cognitive/affective aspects of the empathy. It begins with a summary of the previous review (Asada, 2014b) and an introduction to affective developmental robotics as a part of cognitive developmental robotics focusing on the affective aspects. This is followed by a review and discussion on several approaches for two focused aspects of affective developmental robotics. Finally, future issues involved in the development of a more authentic form of artificial empathy are discussed.",
"title": ""
},
{
"docid": "ae3fb9d4ea2902165a364cfc6fd15b84",
"text": "We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single model and ensemble model results achieving the new state-of-the-art scores on the Stanford NLI dataset.",
"title": ""
},
{
"docid": "8053e52a227757090de0a88b80055e8c",
"text": "INTRODUCTION\nWe examined US adults' understanding of a Nutrition Facts panel (NFP), which requires health literacy (ie, prose, document, and quantitative literacy skills), and the association between label understanding and dietary behavior.\n\n\nMETHODS\nData were from the Health Information National Trends Survey, a nationally representative survey of health information seeking among US adults (N = 3,185) conducted from September 6, 2013, through December 30, 2013. Participants viewed an ice cream nutrition label and answered 4 questions that tested their ability to apply basic arithmetic and understanding of percentages to interpret the label. Participants reported their intake of sugar-sweetened soda, fruits, and vegetables. Regression analyses tested associations among label understanding, demographic characteristics, and self-reported dietary behaviors.\n\n\nRESULTS\nApproximately 24% of people could not determine the calorie content of the full ice-cream container, 21% could not estimate the number of servings equal to 60 g of carbohydrates, 42% could not estimate the effect on daily calorie intake of foregoing 1 serving, and 41% could not calculate the percentage daily value of calories in a single serving. Higher scores for label understanding were associated with consuming more vegetables and less sugar-sweetened soda, although only the association with soda consumption remained significant after adjusting for demographic factors.\n\n\nCONCLUSION\nMany consumers have difficulty interpreting nutrition labels, and label understanding correlates with self-reported dietary behaviors. The 2016 revised NFP labels may address some deficits in consumer understanding by eliminating the need to perform certain calculations (eg, total calories per package). However, some tasks still require the ability to perform calculations (eg, percentage daily value of calories). Schools have a role in teaching skills, such as mathematics, needed for nutrition label understanding.",
"title": ""
},
{
"docid": "5ed719161f832a0c5297d0ab0411f727",
"text": "Cameras and inertial sensors are each good candidates for autonomous vehicle navigation, modeling from video, and other applications that require six-degrees-of-freedom motion estimation. However, these sensors are also good candidates to be deployed together, since each can be used to resolve the ambiguities in estimated motion that result from using the other modality alone. In this paper, we consider the specific problem of estimating sensor motion and other unknowns from image, gyro, and accelerometer measurements, in environments without known fiducials. This paper targets applications where external positions references such as global positioning are not available, and focuses on the use of small and inexpensive inertial sensors, for applications where weight and cost requirements preclude the use of precision inertial navigation systems. We present two algorithms for estimating sensor motion from image and inertial measurements. The first algorithm is a batch method, which produces estimates of the sensor motion, scene structure, and other unknowns using measurements from the entire observation sequence simultaneously. The second algorithm recovers sensor motion, scene structure, and other parameters recursively, and is suitable for use with long or “infinite” sequences, in which no feature",
"title": ""
},
{
"docid": "da36aa77b26e5966bdb271da19bcace3",
"text": "We present Brian, a new clock driven simulator for spiking neural networks which is available on almost all platforms. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is very well adapted to these goals. Python is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages. Brian allows you to write very concise, natural and readable code for simulations, and makes it quick and efficient to play with these models (for example, changing the differential equations doesn't require a recompile of the code). Figure 1 shows an example of a complete network implemented in Brian, a randomly connected network of integrate and fire neurons with exponential inhibitory and excitatory currents (the CUBA network from [1]). Defining the model, running from Seventeenth Annual Computational Neuroscience Meeting: CNS*2008 Portland, OR, USA. 19–24 July 2008",
"title": ""
},
{
"docid": "bbf9612e6073d5cc1b9ff1eec9889649",
"text": "During the last decade the amount of scientific information available on-line increased at an unprecedented rate. As a consequence, nowadays researchers are overwhelmed by an enormous and continuously growing number of articles to consider when they perform research activities like the exploration of advances in specific topics, peer reviewing, writing and evaluation of proposals. Natural Language Processing Technology represents a key enabling factor in providing scientists with intelligent patterns to access to scientific information. Extracting information from scientific papers, for example, can contribute to the development of rich scientific knowledge bases which can be leveraged to support intelligent knowledge access and question answering. Summarization techniques can reduce the size of long papers to their essential content or automatically generate state-of-the-art-reviews. Paraphrase or textual entailment techniques can contribute to the identification of relations across different scientific textual sources. This tutorial provides an overview of the most relevant tasks related to the processing of scientific documents, including but not limited to the in-depth analysis of the structure of the scientific articles, their semantic interpretation, content extraction and summarization.",
"title": ""
},
{
"docid": "e7f295b7921658b59eacbcaf1661df77",
"text": "Spyros Kotoulas IBM Research Ireland Urbanization has dramatically in creased over the past few years, and forecasts show that people’s migration to urban areas isn’t going to decrease.1 Megacities with tens of millions of inhabitants are no longer exceptional. This concentration of popu lation within cities poses numerous chal lenges in terms of both city governance and peoples’ lives. As a consequence, “smarter” solutions are necessary to bet ter address emerging requirements in urban environments. Smart cities have claimed a central position in the innovation agendas of governments, research organizations, and technology vendors, posing unique and difficult challenges in terms of problem domain, scope, and signifi cance. From a research perspective, smart cities are inherently interdisci plinary: they require investigation and cooperation across several disciplines, spanning from economics to social sci ences, from politics to infrastructure management. Specifically, research ers are actively pursuing advances in information and communication technologies (ICT). This interest has a natural focus around Internet tech nologies. This special issue illustrates a set of research efforts representing the state of the art in the field, including managing and interpreting informa tion from social media, energy infra structures, buildings, and other sensor systems.",
"title": ""
},
{
"docid": "f0b02824162279793d2c29f8aa7e28a2",
"text": "Text mining is a very exciting research area as it tries to discover knowledge from unstructured texts. These texts can be found on a computer desktop, intranets and the internet. The aim of this paper is to give an overview of text mining in the contexts of its techniques, application domains and the most challenging issue. The focus is given on fundamentals methods of text mining which include natural language possessing and information extraction. This paper also gives a short review on domains which have employed text mining. The challenging issue in text mining which is caused by the complexity in a natural language is also addressed in this paper.",
"title": ""
},
{
"docid": "e9dc75f34b398b4e0d028f4dbbb707d1",
"text": "INTRODUCTION\nUniversity students are potentially important targets for the promotion of healthy lifestyles as this may reduce the risks of lifestyle-related disorders later in life. This cross-sectional study examined differences in eating behaviours, dietary intake, weight status, and body composition between male and female university students.\n\n\nMETHODOLOGY\nA total of 584 students (59.4% females and 40.6% males) aged 20.6 +/- 1.4 years from four Malaysian universities in the Klang Valley participated in this study. Participants completed the Eating Behaviours Questionnaire and two-day 24-hour dietary recall. Body weight, height, waist circumference and percentage of body fat were measured.\n\n\nRESULTS\nAbout 14.3% of males and 22.4% of females were underweight, while 14.0% of males and 12.3% of females were overweight and obese. A majority of the participants (73.8% males and 74.6% females) skipped at least one meal daily in the past seven days. Breakfast was the most frequently skipped meal. Both males and females frequently snacked during morning tea time. Fruits and biscuits were the most frequently consumed snack items. More than half of the participants did not meet the Malaysian Recommended Nutrient Intake (RNI) for energy, vitamin C, thiamine, riboflavin, niacin, iron (females only), and calcium. Significantly more males than females achieved the RNI levels for energy, protein and iron intakes.\n\n\nCONCLUSION\nThis study highlights the presence of unhealthy eating behaviours, inadequate nutrient intake, and a high prevalence of underweight among university students. Energy and nutrient intakes differed between the sexes. Therefore, promoting healthy eating among young adults is crucial to achieve a healthy nutritional status.",
"title": ""
}
] |
scidocsrr
|
1fb2020d50c3431d79a881ab8be753f5
|
EEG-based estimation of mental fatigue by using KPCA-HMM and complexity parameters
|
[
{
"docid": "17c12cc27cd66d0289fe3baa9ab4124d",
"text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.",
"title": ""
}
] |
[
{
"docid": "976f16e21505277525fa697876b8fe96",
"text": "A general technique for obtaining intermediate-band crystal filters from prototype low-pass (LP) networks which are neither symmetric nor antimetric is presented. This immediately enables us to now realize the class of low-transient responses. The bandpass (BP) filter appears as a cascade of symmetric lattice sections, obtained by partitioning the LP prototype filter, inserting constant reactances where necessary, and then applying the LP to BP frequency transformation. Manuscript received January 7, 1974; revised October 9, 1974. The author is with the Systems Development Division, Westinghouse Electric Corporation, Baltimore, Md. The cascade is composed of only two fundamental sections. Finally, the method introduced is illustrated with an example.",
"title": ""
},
{
"docid": "d5c0950e12e76c5c63b92ef7cd002782",
"text": "In recent years, machine learning approaches have been successfully applied for analysis of neuroimaging data, to help in the context of disease diagnosis. We provide, in this paper, an overview of recent support vector machine-based methods developed and applied in psychiatric neuroimaging for the investigation of schizophrenia. In particular, we focus on the algorithms implemented by our group, which have been applied to classify subjects affected by schizophrenia and healthy controls, comparing them in terms of accuracy results with other recently published studies. First we give a description of the basic terminology used in pattern recognition and machine learning. Then we separately summarize and explain each study, highlighting the main features that characterize each method. Finally, as an outcome of the comparison of the results obtained applying the described different techniques, conclusions are drawn in order to understand how much automatic classification approaches can be considered a useful tool in understanding the biological underpinnings of schizophrenia. We then conclude by discussing the main implications achievable by the application of these methods into clinical practice.",
"title": ""
},
{
"docid": "0868f1ccd67db523026f1650b03311ba",
"text": "Conflict with humans over livestock and crops seriously undermines the conservation prospects of India's large and potentially dangerous mammals such as the tiger (Panthera tigris) and elephant (Elephas maximus). This study, carried out in Bhadra Tiger Reserve in south India, estimates the extent of material and monetary loss incurred by resident villagers between 1996 and 1999 in conflicts with large felines and elephants, describes the spatiotemporal patterns of animal damage, and evaluates the success of compensation schemes that have formed the mainstay of loss-alleviation measures. Annually each household lost an estimated 12% (0.9 head) of their total holding to large felines, and approximately 11% of their annual grain production (0.82 tonnes per family) to elephants. Compensations awarded offset only 5% of the livestock loss and 14% of crop losses and were accompanied by protracted delays in the processing of claims. Although the compensation scheme has largely failed to achieve its objective of alleviating loss, its implementation requires urgent improvement if reprisal against large wild mammals is to be minimized. Furthermore, innovative schemes of livestock and crop insurance need to be tested as alternatives to compensations.",
"title": ""
},
{
"docid": "b988525d515588da8becc18c2aa21e82",
"text": "Numerical optimization has been used as an extension of vehicle dynamics simulation in order to reproduce trajectories and driving techniques used by expert race drivers and investigate the effects of several vehicle parameters in the stability limit operation of the vehicle. In this work we investigate how different race-driving techniques may be reproduced by considering different optimization cost functions. We introduce a bicycle model with suspension dynamics and study the role of the longitudinal load transfer in limit vehicle operation, i.e., when the tires operate at the adhesion limit. Finally we demonstrate that for certain vehicle configurations the optimal trajectory may include large slip angles (drifting), which matches the techniques used by rally-race drivers.",
"title": ""
},
{
"docid": "06f4ec7c6425164ee7fc38a8b26b8437",
"text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.",
"title": ""
},
{
"docid": "cb6223183d3602d2e67aafc0b835a405",
"text": "Electrocardiogram is widely used to diagnose the congestive heart failure (CHF). It is the primary noninvasive diagnostic tool that can guide in the management and follow-up of patients with CHF. Heart rate variability (HRV) signals which are nonlinear in nature possess the hidden signatures of various cardiac diseases. Therefore, this paper proposes a nonlinear methodology, empirical mode decomposition (EMD), for an automated identification and classification of normal and CHF using HRV signals. In this work, HRV signals are subjected to EMD to obtain intrinsic mode functions (IMFs). From these IMFs, thirteen nonlinear features such as approximate entropy $$ (E_{\\text{ap}}^{x} ) $$ ( E ap x ) , sample entropy $$ (E_{\\text{s}}^{x} ) $$ ( E s x ) , Tsallis entropy $$ (E_{\\text{ts}}^{x} ) $$ ( E ts x ) , fuzzy entropy $$ (E_{\\text{f}}^{x} ) $$ ( E f x ) , Kolmogorov Sinai entropy $$ (E_{\\text{ks}}^{x} ) $$ ( E ks x ) , modified multiscale entropy $$ (E_{{{\\text{mms}}_{y} }}^{x} ) $$ ( E mms y x ) , permutation entropy $$ (E_{\\text{p}}^{x} ) $$ ( E p x ) , Renyi entropy $$ (E_{\\text{r}}^{x} ) $$ ( E r x ) , Shannon entropy $$ (E_{\\text{sh}}^{x} ) $$ ( E sh x ) , wavelet entropy $$ (E_{\\text{w}}^{x} ) $$ ( E w x ) , signal activity $$ (S_{\\text{a}}^{x} ) $$ ( S a x ) , Hjorth mobility $$ (H_{\\text{m}}^{x} ) $$ ( H m x ) , and Hjorth complexity $$ (H_{\\text{c}}^{x} ) $$ ( H c x ) are extracted. Then, different ranking methods are used to rank these extracted features, and later, probabilistic neural network and support vector machine are used for differentiating the highly ranked nonlinear features into normal and CHF classes. We have obtained an accuracy, sensitivity, and specificity of 97.64, 97.01, and 98.24 %, respectively, in identifying the CHF. The proposed automated technique is able to identify the person having CHF alarming (alerting) the clinicians to respond quickly with proper treatment action. Thus, this method may act as a valuable tool for increasing the survival rate of many cardiac patients.",
"title": ""
},
{
"docid": "f6ac111d3ece47f9881a4f1b0ce6d4be",
"text": "An Enterprise Framework (EF) is a software architecture. Such frameworks expose a rich set of semantics and modeling paradigms for developing and extending enterprise applications. EFs are, by design, the cornerstone of an organization’s systems development activities. EFs offer a streamlined and flexible alternative to traditional tools and applications which feature numerous point solutions integrated into complex and often inflexible environments. Enterprise Frameworks play an important role since they allow reuse of design knowledge and offer techniques for creating reference models and scalable architectures for enterprise integration. These models and architectures are sufficiently flexible and powerful to be used at multiple levels, e.g. from the integration of the planning systems of geographically distributed factories, to generate a global virtual factory, down to the monitoring and control system for a single production cell. These frameworks implement or enforce well-documented standards for component integration and collaboration. The architecture of an Enterprise framework provides for ready integration with new or existing components. It defines how these components must interact with the framework and how objects will collaborate. In addition, it defines how developers' work together to develop and extend enterprise applications based on the framework. Therefore, the goal of an Enterprise framework is to reduce complexity and lifecycle costs of enterprise systems, while ensuring flexibility.",
"title": ""
},
{
"docid": "6514ddb39c465a8ca207e24e60071e7f",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "7fed6f57ba2e17db5986d47742dc1a9c",
"text": "Partial Least Squares Regression (PLSR) is a linear regression technique developed to deal with high-dimensional regressors and one or several response variables. In this paper we introduce robustified versions of the SIMPLS algorithm being the leading PLSR algorithm because of its speed and efficiency. Because SIMPLS is based on the empirical cross-covariance matrix between the response variables and the regressors and on linear least squares regression, the results are affected by abnormal observations in the data set. Two robust methods, RSIMCD and RSIMPLS, are constructed from a robust covariance matrix for high-dimensional data and robust linear regression. We introduce robust RMSECV and RMSEP values for model calibration and model validation. Diagnostic plots are constructed to visualize and classify the outliers. Several simulation results and the analysis of real data sets show the effectiveness and the robustness of the new approaches. Because RSIMPLS is roughly twice as fast as RSIMCD, it stands out as the overall best method.",
"title": ""
},
{
"docid": "08e121203b159b7d59f17d65a33580f4",
"text": "Coded structured light is an optical technique based on active stereovision that obtains the shape of objects. One shot techniques are based on projecting a unique light pattern with an LCD projector so that grabbing an image with a camera, a large number of correspondences can be obtained. Then, a 3D reconstruction of the illuminated object can be recovered by means of triangulation. The most used strategy to encode one-shot patterns is based on De Bruijn sequences. In This work a new way to design patterns using this type of sequences is presented. The new coding strategy minimises the number of required colours and maximises both the resolution and the accuracy.",
"title": ""
},
{
"docid": "38438e6a0bd03ad5f076daa1f248d001",
"text": "In recent years, research on reading-compr question and answering has drawn intense attention in Language Processing. However, it is still a key issue to the high-level semantic vector representation of quest paragraph. Drawing inspiration from DrQA [1], wh question and answering system proposed by Facebook, tl proposes an attention-based question and answering 11 adds the binary representation of the paragraph, the par; attention to the question, and the question's attentioi paragraph. Meanwhile, a self-attention calculation m proposed to enhance the question semantic vector reption. Besides, it uses a multi-layer bidirectional Lon: Term Memory(BiLSTM) networks to calculate the h semantic vector representations of paragraphs and q Finally, bilinear functions are used to calculate the pr of the answer's position in the paragraph. The expe results on the Stanford Question Answering Dataset(SQl development set show that the F1 score is 80.1% and tl 71.4%, which demonstrates that the performance of the is better than that of the model of DrQA, since they inc 2% and 1.3% respectively.",
"title": ""
},
{
"docid": "a27660db1d7d2a6724ce5fd8991479f7",
"text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.",
"title": ""
},
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "39e332a58625a12ef3e14c1a547a8cad",
"text": "This paper presents an overview of the recent achievements in the held of substrate integrated waveguides (SIW) technology, with particular emphasis on the modeling strategy and design considerations of millimeter-wave integrated circuits as well as the physical interpretation of the operation principles and loss mechanisms of these structures. The most common numerical methods for modeling both SIW interconnects and circuits are presented. Some considerations and guidelines for designing SIW structures, interconnects and circuits are discussed, along with the physical interpretation of the major issues related to radiation leakage and losses. Examples of SIW circuits and components operating in the microwave and millimeter wave bands are also reported, with numerical and experimental results.",
"title": ""
},
{
"docid": "49517920ddecf10a384dc3e98e39459b",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "5f8ac79ad733d031ecaff19a748666e2",
"text": "Decision making techniques used to help evaluate current suppliers should aim at classifying performance of individual suppliers against desired levels of performance so as to devise suitable action plans to increase suppliers' performance and capabilities. Moreover, decision making related to what course of action to take for a particular supplier depends on the evaluation of short and long term factors of performance, as well as on the type of item to be supplied. However, most of the propositions found in the literature do not consider the type of supplied item and are more suitable for ordering suppliers rather than categorizing them. To deal with this limitation, this paper presents a new approach based on fuzzy inference combined with the simple fuzzy grid method to help decisionmaking in the supplier evaluation for development. This approach follows a procedure for pattern classification based on decision rules to categorize supplier performance according to the item category so as to indicate strengths and weaknesses of current suppliers, helping decision makers review supplier development action plans. Applying the method to a company in the automotive sector shows that it brings objectivity and consistency to supplier evaluation, supporting consensus building through the decision making process. Critical items can be identified which aim at proposing directives for managing and developing suppliers for leverage, bottleneck and strategic items. It also helps to identify suppliers in need of attention or suppliers that should be replaced. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d09b5d295fb78756cc6141471a2415a3",
"text": "One-point (or n-point) crossover has the property that schemata exhibited by both parents are ‘respected’transferred to the offspring without disruption. In addition, new schemata may, potentially, be created by combination of the genes on which the parents differ. Some argue that the preservation of similarity is the important aspect of crossover, and that the combination of differences (key to the building-block hypothesis) is unlikely to be valuable. In this paper, we discuss the operation of recombination on a hierarchical buildingblock problem. Uniform crossover, which preserves similarity, fails on this problem. Whereas, one-point crossover, that both preserves similarity and combines differences, succeeds. In fact, a somewhat perverse recombination operator, that combines differences but destroys schemata that are common to both parents, also succeeds. Thus, in this problem, combination of schemata from dissimilar parents is required, and preserving similarity is not required. The test problem represents an extreme case, but it serves to illustrate the different aspects of recombination that are available in regular operators such as one-point crossover.",
"title": ""
},
{
"docid": "0d0fae25e045c730b68d63e2df1dfc7f",
"text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.",
"title": ""
},
{
"docid": "75233d6d94fec1f43fa02e8043470d4d",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
},
{
"docid": "81e49c8763f390e4b86968ff91214b5a",
"text": "Choreographies allow business and service architects to specify with a global perspective the requirements of applications built over distributed and interacting software entities. While being a standard for the abstract specification of business workflows and collaboration between services, the Business Process Modeling Notation (BPMN) has only been recently extended into BPMN 2.0 to support an interaction model of choreography, which, as opposed to interconnected interface models, is better suited to top-down development processes. An important issue with choreographies is real-izability, i.e., whether peers obtained via projection from a choreography interact as prescribed in the choreography requirements. In this work, we propose a realizability checking approach for BPMN 2.0 choreographies. Our approach is formally grounded on a model transformation into the LOTOS NT process algebra and the use of equivalence checking. It is also completely tool-supported through interaction with the Eclipse BPMN 2.0 editor and the CADP process algebraic toolbox.",
"title": ""
}
] |
scidocsrr
|
ce75749e2f558ac953323ec5541b7b67
|
Analysis of the 802.11i 4-way handshake
|
[
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] |
[
{
"docid": "3653e29e71d70965317eb4c450bc28da",
"text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.",
"title": ""
},
{
"docid": "3eebecff1cb89f5490602f43717902b7",
"text": "Radiation therapy (RT) is an integral part of prostate cancer treatment across all stages and risk groups. Immunotherapy using a live, attenuated, Listeria monocytogenes-based vaccines have been shown previously to be highly efficient in stimulating anti-tumor responses to impact on the growth of established tumors in different tumor models. Here, we evaluated the combination of RT and immunotherapy using Listeria monocytogenes-based vaccine (ADXS31-142) in a mouse model of prostate cancer. Mice bearing PSA-expressing TPSA23 tumor were divided to 5 groups receiving no treatment, ADXS31-142, RT (10 Gy), control Listeria vector and combination of ADXS31-142 and RT. Tumor growth curve was generated by measuring the tumor volume biweekly. Tumor tissue, spleen, and sera were harvested from each group for IFN-γ ELISpot, intracellular cytokine assay, tetramer analysis, and immunofluorescence staining. There was a significant tumor growth delay in mice that received combined ADXS31-142 and RT treatment as compared with mice of other cohorts and this combined treatment causes complete regression of their established tumors in 60 % of the mice. ELISpot and immunohistochemistry of CD8+ cytotoxic T Lymphocytes (CTL) showed a significant increase in IFN-γ production in mice with combined treatment. Tetramer analysis showed a fourfold and a greater than 16-fold increase in PSA-specific CTLs in animals receiving ADXS31-142 alone and combination treatment, respectively. A similar increase in infiltration of CTLs was observed in the tumor tissues. Combination therapy with RT and Listeria PSA vaccine causes significant tumor regression by augmenting PSA-specific immune response and it could serve as a potential treatment regimen for prostate cancer.",
"title": ""
},
{
"docid": "89fd46da8542a8ed285afb0cde9cc236",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
},
{
"docid": "06cc255e124702878e2106bf0e8eb47c",
"text": "Agent technology has been recognized as a promising paradigm for next generation manufacturing systems. Researchers have attempted to apply agent technology to manufacturing enterprise integration, enterprise collaboration (including supply chain management and virtual enterprises), manufacturing process planning and scheduling, shop floor control, and to holonic manufacturing as an implementation methodology. This paper provides an update review on the recent achievements in these areas, and discusses some key issues in implementing agent-based manufacturing systems such as agent encapsulation, agent organization, agent coordination and negotiation, system dynamics, learning, optimization, security and privacy, tools and standards. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f2492c40f98e3cccc3ac3ab7accf4af7",
"text": "Accurate detection of single-trial event-related potentials (ERPs) in the electroencephalogram (EEG) is a difficult problem that requires efficient signal processing and machine learning techniques. Supervised spatial filtering methods that enhance the discriminative information in EEG data are commonly used to improve single-trial ERP detection. We propose a convolutional neural network (CNN) with a layer dedicated to spatial filtering for the detection of ERPs and with training based on the maximization of the area under the receiver operating characteristic curve (AUC). The CNN is compared with three common classifiers: 1) Bayesian linear discriminant analysis; 2) multilayer perceptron (MLP); and 3) support vector machines. Prior to classification, the data were spatially filtered with xDAWN (for the maximization of the signal-to-signal-plus-noise ratio), common spatial pattern, or not spatially filtered. The 12 analytical techniques were tested on EEG data recorded in three rapid serial visual presentation experiments that required the observer to discriminate rare target stimuli from frequent nontarget stimuli. Classification performance discriminating targets from nontargets depended on both the spatial filtering method and the classifier. In addition, the nonlinear classifier MLP outperformed the linear methods. Finally, training based AUC maximization provided better performance than training based on the minimization of the mean square error. The results support the conclusion that the choice of the systems architecture is critical and both spatial filtering and classification must be considered together.",
"title": ""
},
{
"docid": "25e50a3e98b58f833e1dd47aec94db21",
"text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.",
"title": ""
},
{
"docid": "3467f4be08c4b8d6cd556f04f324ce67",
"text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "7f68d6a6432f55684ad79a4f79406dab",
"text": "Half of patients with heart failure (HF) have a preserved left ventricular ejection fraction (HFpEF). Morbidity and mortality in HFpEF are similar to values observed in patients with HF and reduced EF, yet no effective treatment has been identified. While early research focused on the importance of diastolic dysfunction in the pathophysiology of HFpEF, recent studies have revealed that multiple non-diastolic abnormalities in cardiovascular function also contribute. Diagnosis of HFpEF is frequently challenging and relies upon careful clinical evaluation, echo-Doppler cardiography, and invasive haemodynamic assessment. In this review, the principal mechanisms, diagnostic approaches, and clinical trials are reviewed, along with a discussion of novel treatment strategies that are currently under investigation or hold promise for the future.",
"title": ""
},
{
"docid": "3edf5d1cce2a26fbf5c2cc773649629b",
"text": "We conducted three experiments to investigate the mental images associated with idiomatic phrases in English. Our hypothesis was that people should have strong conventional images for many idioms and that the regularity in people's knowledge of their images for idioms is due to the conceptual metaphors motivating the figurative meanings of idioms. In the first study, subjects were asked to form and describe their mental images for different idiomatic expressions. Subjects were then asked a series of detailed questions about their images regarding the causes and effects of different events within their images. We found high consistency in subjects' images of idioms with similar figurative meanings despite differences in their surface forms (e.g., spill the beans and let the cat out of the bag). Subjects' responses to detailed questions about their images also showed a high degree of similarity in their answers. Further examination of subjects' imagery protocols supports the idea that the conventional images and knowledge associated with idioms are constrained by the conceptual metaphors (e.g., the MIND IS A CONTAINER and IDEAS ARE ENTITIES) which motivate the figurative meanings of idioms. The results of two control studies showed that the conventional images associated with idioms are not solely based on their figurative meanings (Experiment 2) and that the images associated with literal phrases (e.g., spill the peas) were quite varied and unlikely to be constrained by conceptual metaphor (Experiment 3). These findings support the view that idioms are not \"dead\" metaphors with their meanings being arbitrarily determined. Rather, the meanings of many idioms are motivated by speakers' tacit knowledge of the conceptual metaphors underlying the meanings of these figurative phrases.",
"title": ""
},
{
"docid": "69ced55a44876f7cc4e57f597fcd5654",
"text": "A wideband circularly polarized (CP) antenna with a conical radiation pattern is investigated. It consists of a feeding probe and parasitic dielectric parallelepiped elements that surround the probe. Since the structure of the antenna looks like a bird nest, it is named as bird-nest antenna. The probe, which protrudes from a circular ground plane, operates in its fundamental monopole mode that generates omnidirectional linearly polarized (LP) fields. The dielectric parallelepipeds constitute a wave polarizer that converts omnidirectional LP fields of the probe into omnidirectional CP fields. To verify the design, a prototype operating in C band was fabricated and measured. The reflection coefficient, axial ratio (AR), radiation pattern, and antenna gain are studied, and reasonable agreement between the measured and simulated results is observed. The prototype has a 10-dB impedance bandwidth of 41.0% and a 3-dB AR bandwidth of as wide as 54.9%. A parametric study was carried out to characterize the proposed antenna. Also, a design guideline is given to facilitate designs of the antenna.",
"title": ""
},
{
"docid": "db3abbca12b7a1c4e611aa3707f65563",
"text": "This paper describes the background and methods for the prod uction of CIDOC-CRM compliant data sets from diverse collec tions of source data. The construction of such data sets is based on data in column format, typically exported for databases, as well as free text, typically created through scanning and OCR proce ssing or transcription.",
"title": ""
},
{
"docid": "7db5807fc15aeb8dfe4669a8208a8978",
"text": "This document is an output from a project funded by the UK Department for International Development (DFID) for the benefit of developing countries. The views expressed are not necessarily those of DFID. Contents Contents i List of tables ii List of figures ii List of boxes ii Acronyms iii Acknowledgements iv Summary 1 1. Introduction: why worry about disasters? 7 Objectives of this Study 7 Global disaster trends 7 Why donors should be concerned 9 What donors can do 9 2. What makes a disaster? 11 Characteristics of a disaster 11 Disaster risk reduction 12 The diversity of hazards 12 Vulnerability and capacity, coping and adaptation 15 Resilience 16 Poverty and vulnerability: links and differences 16 'The disaster management cycle' 17 3. Why should disasters be a development concern? 19 3.1 Disasters hold back development 19 Disasters undermine efforts to achieve the Millennium Development Goals 19 Macroeconomic impacts of disasters 21 Reallocation of resources from development to emergency assistance 22 Disaster impact on communities and livelihoods 23 3.2 Disasters are rooted in development failures 25 Dominant development models and risk 25 Development can lead to disaster 26 Poorly planned attempts to reduce risk can make matters worse 29 Disaster responses can themselves exacerbate risk 30 3.3 'Disaster-proofing' development: what are the gains? 31 From 'vicious spirals' of failed development and disaster risk… 31 … to 'virtuous spirals' of risk reduction 32 Disaster risk reduction can help achieve the Millennium Development Goals 33 … and can be cost-effective 33 4. Why does development tend to overlook disaster risk? 36 4.1 Introduction 36 4.2 Incentive, institutional and funding structures 36 Political incentives and governance in disaster prone countries 36 Government-donor relations and moral hazard 37 Donors and multilateral agencies 38 NGOs 41 4.3 Lack of exposure to and information on disaster issues 41 4.4 Assumptions about the risk-reducing capacity of development 43 ii 5. Tools for better integrating disaster risk reduction into development 45 Introduction 45 Poverty Reduction Strategy Papers (PRSPs) 45 UN Development Assistance Frameworks (UNDAFs) 47 Country assistance plans 47 National Adaptation Programmes of Action (NAPAs) 48 Partnership agreements with implementing agencies and governments 49 Programme and project appraisal guidelines 49 Early warning and information systems 49 Risk transfer mechanisms 51 International initiatives and policy forums 51 Risk reduction performance targets and indicators for donors 52 6. Conclusions and recommendations 53 6.1 Main conclusions 53 6.2 Recommendations 54 Core recommendation …",
"title": ""
},
{
"docid": "4a9a53444a74f7125faa99d58a5b0321",
"text": "The new transformed read-write Web has resulted in a rapid growth of user generated content on the Web resulting into a huge volume of unstructured data. A substantial part of this data is unstructured text such as reviews and blogs. Opinion mining and sentiment analysis (OMSA) as a research discipline has emerged during last 15 years and provides a methodology to computationally process the unstructured data mainly to extract opinions and identify their sentiments. The relatively new but fast growing research discipline has changed a lot during these years. This paper presents a scientometric analysis of research work done on OMSA during 20 0 0–2016. For the scientometric mapping, research publications indexed in Web of Science (WoS) database are used as input data. The publication data is analyzed computationally to identify year-wise publication pattern, rate of growth of publications, types of authorship of papers on OMSA, collaboration patterns in publications on OMSA, most productive countries, institutions, journals and authors, citation patterns and an year-wise citation reference network, and theme density plots and keyword bursts in OMSA publications during the period. A somewhat detailed manual analysis of the data is also performed to identify popular approaches (machine learning and lexicon-based) used in these publications, levels (document, sentence or aspect-level) of sentiment analysis work done and major application areas of OMSA. The paper presents a detailed analytical mapping of OMSA research work and charts the progress of discipline on various useful parameters. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "abc160fc578bb40935afa7aea93cf6ca",
"text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.",
"title": ""
},
{
"docid": "e97c0bbb74534a16c41b4a717eed87d5",
"text": "This paper is discussing about the road accident severity survey using data mining, where different approaches have been considered. We have collected research work carried out by different researchers based on road accidents. Article describing the review work in context of road accident case’s using data mining approach. The article is consisting of collections of methods in different scenario with the aim to resolve the road accident. Every method is somewhere seeming to productive in some ways to decrease the no of causality. It will give a better edge to different country where the no of accidents is leading to fatality of life.",
"title": ""
},
{
"docid": "7539a738cad3a36336dc7019e2aabb21",
"text": "In this paper a compact antenna for ultrawideband applications is presented. The antenna is based on the biconical antenna design and has two identical elements. Each element is composed of a cone extended with a ring and an inner cylinder. The modification of the well-known biconical structure is made in order to reduce the influence of the radiation of the feeding cable. To obtain the optimum parameters leading to a less impact of the cable effect on the antenna performance, during the optimization process the antenna was coupled with a feeding coaxial cable. The proposed antenna covers the frequency range from 1.5 to 41 GHz with voltage standing wave ratio below 2 and has an omnidirectional radiation pattern. The realized total efficiency is above 85 % which indicates a good performance.",
"title": ""
},
{
"docid": "a87ba6d076c3c05578a6f6d9da22ac79",
"text": "Here we review and extend a new unitary model for the pathophysiology of involutional osteoporosis that identifies estrogen (E) as the key hormone for maintaining bone mass and E deficiency as the major cause of age-related bone loss in both sexes. Also, both E and testosterone (T) are key regulators of skeletal growth and maturation, and E, together with GH and IGF-I, initiate a 3- to 4-yr pubertal growth spurt that doubles skeletal mass. Although E is required for the attainment of maximal peak bone mass in both sexes, the additional action of T on stimulating periosteal apposition accounts for the larger size and thicker cortices of the adult male skeleton. Aging women undergo two phases of bone loss, whereas aging men undergo only one. In women, the menopause initiates an accelerated phase of predominantly cancellous bone loss that declines rapidly over 4-8 yr to become asymptotic with a subsequent slow phase that continues indefinitely. The accelerated phase results from the loss of the direct restraining effects of E on bone turnover, an action mediated by E receptors in both osteoblasts and osteoclasts. In the ensuing slow phase, the rate of cancellous bone loss is reduced, but the rate of cortical bone loss is unchanged or increased. This phase is mediated largely by secondary hyperparathyroidism that results from the loss of E actions on extraskeletal calcium metabolism. The resultant external calcium losses increase the level of dietary calcium intake that is required to maintain bone balance. Impaired osteoblast function due to E deficiency, aging, or both also contributes to the slow phase of bone loss. Although both serum bioavailable (Bio) E and Bio T decline in aging men, Bio E is the major predictor of their bone loss. Thus, both sex steroids are important for developing peak bone mass, but E deficiency is the major determinant of age-related bone loss in both sexes.",
"title": ""
},
{
"docid": "296705d6bfc09f58c8e732a469b17871",
"text": "Computer security incident response teams (CSIRTs) respond to a computer security incident when the need arises. Failure of these teams can have far-reaching effects for the economy and national security. CSIRTs often have to work on an ad hoc basis, in close cooperation with other teams, and in time constrained environments. It could be argued that under these working conditions CSIRTs would be likely to encounter problems. A needs assessment was done to see to which extent this argument holds true. We constructed an incident response needs model to assist in identifying areas that require improvement. We envisioned a model consisting of four assessment categories: Organization, Team, Individual and Instrumental. Central to this is the idea that both problems and needs can have an organizational, team, individual, or technical origin or a combination of these levels. To gather data we conducted a literature review. This resulted in a comprehensive list of challenges and needs that could hinder or improve, respectively, the performance of CSIRTs. Then, semi-structured in depth interviews were held with team coordinators and team members of five public and private sector Dutch CSIRTs to ground these findings in practice and to identify gaps between current and desired incident handling practices. This paper presents the findings of our needs assessment and ends with a discussion of potential solutions to problems with performance in incident response.",
"title": ""
},
{
"docid": "ac57fab046cfd02efa1ece262b07492f",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
}
] |
scidocsrr
|
16880cc223b10e55afce93c0630e34b5
|
Scheduling techniques for hybrid circuit/packet networks
|
[
{
"docid": "8fcc8c61dd99281cfda27bbad4b7623a",
"text": "Modern data centers are massive, and support a range of distributed applications across potentially hundreds of server racks. As their utilization and bandwidth needs continue to grow, traditional methods of augmenting bandwidth have proven complex and costly in time and resources. Recent measurements show that data center traffic is often limited by congestion loss caused by short traffic bursts. Thus an attractive alternative to adding physical bandwidth is to augment wired links with wireless links in the 60 GHz band.\n We address two limitations with current 60 GHz wireless proposals. First, 60 GHz wireless links are limited by line-of-sight, and can be blocked by even small obstacles. Second, even beamforming links leak power, and potential interference will severely limit concurrent transmissions in dense data centers. We propose and evaluate a new wireless primitive for data centers, 3D beamforming, where 60 GHz signals bounce off data center ceilings, thus establishing indirect line-of-sight between any two racks in a data center. We build a small 3D beamforming testbed to demonstrate its ability to address both link blockage and link interference, thus improving link range and number of concurrent transmissions in the data center. In addition, we propose a simple link scheduler and use traffic simulations to show that these 3D links significantly expand wireless capacity compared to their 2D counterparts.",
"title": ""
}
] |
[
{
"docid": "5b6daefbefd44eea4e317e673ad91da3",
"text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.",
"title": ""
},
{
"docid": "74f674ddfd04959303bb89bd6ef22b66",
"text": "Ethernet is the survivor of the LAN wars. It is hard to find an IP packet that has not passed over an Ethernet segment. One important reason for this is Ethernet's simplicity and ease of configuration. However, Ethernet has always been known to be an insecure technology. Recent successful malware attacks and the move towards cloud computing in data centers demand that attention be paid to the security aspects of Ethernet. In this paper, we present known Ethernet related threats and discuss existing solutions from business, hacker, and academic communities. Major issues, like insecurities related to Address Resolution Protocol and to self-configurability, are discussed. The solutions fall roughly into three categories: accepting Ethernet's insecurity and circling it with firewalls; creating a logical separation between the switches and end hosts; and centralized cryptography based schemes. However, none of the above provides the perfect combination of simplicity and security befitting Ethernet.",
"title": ""
},
{
"docid": "9868b2a338911071e5e0553d6aa87eb7",
"text": "This paper reports on a workshop in June 2007 on the topic of the insider threat. Attendees represented academia and research institutions, consulting firms, industry—especially the financial services sector, and government. Most participants were from the United States. Conventional wisdom asserts that insiders account for roughly a third of the computer security loss. Unfortunately, there is currently no way to validate or refute that assertion, because data on the insider threat problem is meager at best. Part of the reason so little data exists on the insider threat problem is that the concepts of insider and insider threat are not consistently defined. Consequently, it is hard to compare even the few pieces of insider threat data that do exist. Monitoring is a means of addressing the insider threat, although it is more successful to verify a case of suspected insider attack than it is to identify insider attacks. Monitoring has (negative) implications for personal privacy. However, companies generally have wide leeway to monitor the activity of their employees. Psychological profiling of potential insider attackers is appealing but may be hard to accomplish. More productive may be using psychological tools to promote positive behavior on the part of employees.",
"title": ""
},
{
"docid": "b200836d9046e79b61627122419d93c4",
"text": "Digital evidence plays a vital role in determining legal case admissibility in electronic- and cyber-oriented crimes. Considering the complicated level of the Internet of Things (IoT) technology, performing the needed forensic investigation will be definitely faced by a number of challenges and obstacles, especially in digital evidence acquisition and analysis phases. Based on the currently available network forensic methods and tools, the performance of IoT forensic will be producing a deteriorated digital evidence trail due to the sophisticated nature of IoT connectivity and data exchangeability via the “things”. In this paper, a revision of IoT digital evidence acquisition procedure is provided. In addition, an improved theoretical framework for IoT forensic model that copes with evidence acquisition issues is proposed and discussed.",
"title": ""
},
{
"docid": "e13b4b92c639a5b697356466e00e05c3",
"text": "In fashion retailing, the display of product inventory at the store is important to capture consumers’ attention. Higher inventory levels might allow more attractive displays and thus increase sales, in addition to avoiding stock-outs. We develop a choice model where product demand is indeed affected by inventory, and controls for product and store heterogeneity, seasonality, promotions and potential unobservable shocks in each market. We empirically test the model with daily traffic, inventory and sales data from a large retailer, at the store-day-product level. We find that the impact of inventory level on sales is positive and highly significant, even in situations of extremely high service level. The magnitude of this effect is large: each 1% increase in product-level inventory at the store increases sales of 0.58% on average. This supports the idea that inventory has a strong role in helping customers choose a particular product within the assortment. We finally describe how a retailer should optimally decide its inventory levels within a category and describe the properties of the optimal solution. Applying such optimization to our data set yields consistent and significant revenue improvements, of more than 10% for any date and store compared to current practices. Submitted: April 6, 2016. Revised: May 17, 2017",
"title": ""
},
{
"docid": "cc8e52fdb69a9c9f3111287905f02bfc",
"text": "We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data; and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.",
"title": ""
},
{
"docid": "acab6a0a8b5e268cd0a5416bd00b4f55",
"text": "We propose SocialFilter, a trust-aware collaborative spam mitigation system. Our proposal enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports concerning spamming hosts that collaborating spam-detecting nodes (reporters) submit to the system. It weighs the spam reports according to the trustworthiness of their reporters to derive a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters by both auditing their reports and by leveraging the social network of the reporters' administrators. The design and evaluation of our proposal offers us the following lessons: a) it is plausible to introduce Sybil-resilient Online-Social-Network-based trust inference mechanisms to improve the reliability and the attack-resistance of collaborative spam mitigation; b) using social links to obtain the trustworthiness of reports concerning spammers can result in comparable spam-blocking effectiveness with approaches that use social links to rate-limit spam (e.g., Ostra [27]); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.",
"title": ""
},
{
"docid": "dfc383a057aa4124dfc4237e607c321a",
"text": "Obfuscation is applied to large quantities of benign and malicious JavaScript throughout the web. In situations where JavaScript source code is being submitted for widespread use, such as in a gallery of browser extensions (e.g., Firefox), it is valuable to require that the code submitted is not obfuscated and to check for that property. In this paper, we describe NOFUS, a static, automatic classifier that distinguishes obfuscated and non-obfuscated JavaScript with high precision. Using a collection of examples of both obfuscated and non-obfuscated JavaScript, we train NOFUS to distinguish between the two and show that the classifier has both a low false positive rate (about 1%) and low false negative rate (about 5%). Applying NOFUS to collections of deployed JavaScript, we show it correctly identifies obfuscated JavaScript files from Alexa top 50 websites. While prior work conflates obfuscation with maliciousness (assuming that detecting obfuscation implies maliciousness), we show that the correlation is weak. Yes, much malware is hidden using obfuscation, but so is benign JavaScript. Further, applying NOFUS to known JavaScript malware, we show our classifier finds 15% of the files are unobfuscated, showing that not all malware is obfuscated.",
"title": ""
},
{
"docid": "6b3db3006f8314559bbbe41620466c6e",
"text": "Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier have been carefully adapted to the problem at hand. That we were able to get better results by a deep learning architecture that autonomously learns the features from the images is the main insight of this study.",
"title": ""
},
{
"docid": "e120320dbe8fa0e2475b96a0b07adec8",
"text": "BACKGROUND\nProne hip extension (PHE) is a common and widely accepted test used for assessment of the lumbo-pelvic movement pattern. Considerable increased in lumbar lordosis during this test has been considered as impairment of movement patterns in lumbo-pelvic region. The purpose of this study was to investigate the change of lumbar lordosis in PHE test in subjects with and without low back pain (LBP).\n\n\nMETHOD\nA two-way mixed design with repeated measurements was used to investigate the lumbar lordosis changes during PHE in two groups of subjects with and without LBP. An equal number of subjects (N = 30) were allocated to each group. A standard flexible ruler was used to measure the size of lumbar lordosis in prone-relaxed position and PHE test in each group.\n\n\nRESULT\nThe result of two-way mixed-design analysis of variance revealed significant health status by position interaction effect for lumbar lordosis (P < 0.001). The main effect of test position on lumbar lordosis was statistically significant (P < 0.001). The lumbar lordosis was significantly greater in the PHE compared to prone-relaxed position in both subjects with and without LBP. The amount of difference in positions was statistically significant between two groups (P < 0.001) and greater change in lumbar lordosis was found in the healthy group compared to the subjects with LBP.\n\n\nCONCLUSIONS\nGreater change in lumbar lordosis during this test may be due to more stiffness in lumbopelvic muscles in the individuals with LBP.",
"title": ""
},
{
"docid": "a3e8a50b38e276d19dc301fcf8818ea1",
"text": "Automated diagnosis of skin cancer is an active area of research with different classification methods proposed so far. However, classification models based on insufficient labeled training data can badly influence the diagnosis process if there is no self-advising and semi supervising capability in the model. This paper presents a semi supervised, self-advised learning model for automated recognition of melanoma using dermoscopic images. Deep belief architecture is constructed using labeled data together with unlabeled data, and fine tuning done by an exponential loss function in order to maximize separation of labeled data. In parallel a self-advised SVM algorithm is used to enhance classification results by counteracting the effect of misclassified data. To increase generalization capability and redundancy of the model, polynomial and radial basis function based SA-SVMs and Deep network are trained using training samples randomly chosen via a bootstrap technique. Then the results are aggregated using least square estimation weighting. The proposed model is tested on a collection of 100 dermoscopic images. The variation in classification error is analyzed with respect to the ratio of labeled and unlabeled data used in the training phase. The classification performance is compared with some popular classification methods and the proposed model using the deep neural processing outperforms most of the popular techniques including KNN, ANN, SVM and semi supervised algorithms like Expectation maximization and transductive SVM.",
"title": ""
},
{
"docid": "4ee6894fade929db82af9cb62fecc0f9",
"text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.",
"title": ""
},
{
"docid": "48d778934127343947b494fe51f56a33",
"text": "In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "81ca5239dbd60a988e7457076aac05d7",
"text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.",
"title": ""
},
{
"docid": "99a4fc6540802ff820fef9ca312cdc1c",
"text": "Problem diagnosis is one crucial aspect in the cloud operation that is becoming increasingly challenging. On the one hand, the volume of logs generated in today's cloud is overwhelmingly large. On the other hand, cloud architecture becomes more distributed and complex, which makes it more difficult to troubleshoot failures. In order to address these challenges, we have developed a tool, called LOGAN, that enables operators to quickly identify the log entries that potentially lead to the root cause of a problem. It constructs behavioral reference models from logs that represent the normal patterns. When problem occurs, our tool enables operators to inspect the divergence of current logs from the reference model and highlight logs likely to contain the hints to the root cause. To support these capabilities we have designed and developed several mechanisms. First, we developed log correlation algorithms using various IDs embedded in logs to help identify and isolate log entries that belong to the failed request. Second, we provide efficient log comparison to help understand the differences between different executions. Finally we designed mechanisms to highlight critical log entries that are likely to contain information pertaining to the root cause of the problem. We have implemented the proposed approach in a popular cloud management system, OpenStack, and through case studies, we demonstrate this tool can help operators perform problem diagnosis quickly and effectively.",
"title": ""
},
{
"docid": "211037c38a50ff4169f3538c3b6af224",
"text": "In this paper we present a method to obtain a depth map from a single image of a scene by exploiting both image content and user interaction. Assuming that regions with low gradients will have similar depth values, we formulate the problem as an optimization process across a graph, where pixels are considered as nodes and edges between neighbouring pixels are assigned weights based on the image gradient. Starting from a number of userdefined constraints, depth values are propagated between highly connected nodes i.e. with small gradients. Such constraints include, for example, depth equalities and inequalities between pairs of pixels, and may include some information about perspective. This framework provides a depth map of the scene, which is useful for a number of applications.",
"title": ""
},
{
"docid": "5d0a77058d6b184cb3c77c05363c02e0",
"text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.",
"title": ""
},
{
"docid": "dfd88750bc1d42e8cc798d2097426910",
"text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.",
"title": ""
}
] |
scidocsrr
|
2bc52851ed031568051d058b56e6d924
|
Path loss models for 5G millimeter wave propagation channels in urban microcells
|
[
{
"docid": "e541be7c81576fdef564fd7eba5d67dd",
"text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.",
"title": ""
},
{
"docid": "c949e051cbfd9cff13d939a7b594e6e6",
"text": "Propagation measurements at 28 GHz were conducted in outdoor urban environments in New York City using four different transmitter locations and 83 receiver locations with distances of up to 500 m. A 400 mega- chip per second channel sounder with steerable 24.5 dBi horn antennas at the transmitter and receiver was used to measure the angular distributions of received multipath power over a wide range of propagation distances and urban settings. Measurements were also made to study the small-scale fading of closely-spaced power delay profiles recorded at half-wavelength (5.35 mm) increments along a small-scale linear track (10 wavelengths, or 107 mm) at two different receiver locations. Our measurements indicate that power levels for small- scale fading do not significantly fluctuate from the mean power level at a fixed angle of arrival. We propose here a new lobe modeling technique that can be used to create a statistical channel model for lobe path loss and shadow fading, and we provide many model statistics as a function of transmitter- receiver separation distance. Our work shows that New York City is a multipath-rich environment when using highly directional steerable horn antennas, and that an average of 2.5 signal lobes exists at any receiver location, where each lobe has an average total angle spread of 40.3° and an RMS angle spread of 7.8°. This work aims to create a 28 GHz statistical spatial channel model for future 5G cellular networks.",
"title": ""
},
{
"docid": "292981db9a4f16e4ba7e02303cbee6c1",
"text": "The millimeter wave frequency spectrum offers unprecedented bandwidths for future broadband cellular networks. This paper presents the world's first empirical measurements for 28 GHz outdoor cellular propagation in New York City. Measurements were made in Manhattan for three different base station locations and 75 receiver locations over distances up to 500 meters. A 400 megachip-per-second channel sounder and directional horn antennas were used to measure propagation characteristics for future mm-wave cellular systems in urban environments. This paper presents measured path loss as a function of the transmitter - receiver separation distance, the angular distribution of received power using directional 24.5 dBi antennas, and power delay profiles observed in New York City. The measured data show that a large number of resolvable multipath components exist in both non line of sight and line of sight environments, with observed multipath excess delay spreads (20 dB) as great as 1388.4 ns and 753.5 ns, respectively. The widely diverse spatial channels observed at any particular location suggest that millimeter wave mobile communication systems with electrically steerable antennas could exploit resolvable multipath components to create viable links for cell sizes on the order of 200 m.",
"title": ""
}
] |
[
{
"docid": "6b0f5ddb7be84cf9043b23f1141699b4",
"text": "We show that even when face images are unconstrained and arbitrarily paired, face swapping between them is quite simple. To this end, we make the following contributions. (a) Instead of tailoring systems for face segmentation, as others previously proposed, we show that a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations, provided that it is trained on a rich enough example set. For this purpose, we describe novel data collection and generation routines which provide challenging segmented face examples. (b) We use our segmentations for robust face swapping under unprecedented conditions. (c) Unlike previous work, our swapping is robust enough to allow for extensive quantitative tests. To this end, we use the Labeled Faces in the Wild (LFW) benchmark and measure the effect of intra- and inter-subject face swapping on recognition. We show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of our method. In line with established perceptual studies, we show that better face swapping produces less recognizable inter-subject results. This is the first time this effect was quantitatively demonstrated by machine vision systems.",
"title": ""
},
{
"docid": "799ccd75d6781e38cf5e2faee5784cae",
"text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "e601c68a6118139c1183ba4abd012183",
"text": "Robert M. Golub, MD, Editor The JAMA Patient Page is a public service of JAMA. The information and recommendations appearing on this page are appropriate in most instances, but they are not a substitute for medical diagnosis. For specific information concerning your personal medical condition, JAMA suggests that you consult your physician. This page may be photocopied noncommercially by physicians and other health care professionals to share with patients. To purchase bulk reprints, call 312/464-0776. C H IL D H E A TH The Journal of the American Medical Association",
"title": ""
},
{
"docid": "9cf4d68ab09e98cd5b897308c8791d26",
"text": "Gesture Recognition Technology has evolved greatly over the years. The past has seen the contemporary Human – Computer Interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result gesture recognition technology has developed since the early 1900s with a view to achieving ease and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures to operate with the technology around us to enable us to make optimum use of our body gestures making our work faster and more human friendly. The present has seen huge development in this field ranging from devices like virtual keyboards, video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body and every angle made by the parts of the body in order to supplement technology to become human friendly and understand natural human behavior and gestures. The future of this technology is very bright with prototypes of amazing devices in research and development to make the world equipped with digital information at hand whenever and wherever required.",
"title": ""
},
{
"docid": "76383091c5eb5acd0976c41dc25cc0b2",
"text": "(2003). Towards a taxonomy of a set of discourse markers in dialog: a theoretical and computational linguistic account. Abstract Discourse markers are verbal and non-verbal devices that mark transition points in communication. They presumably facilitate the construction of a mental representation of the events described by the discourse. A taxonomy of these relational markers is one important beginning in investigations of language use. While several taxonomies of coherence relations have been proposed for monolog, only a few have been proposed for dialog. This paper presents a taxonomy of between-turn coherence relations in dialog and discusses several issues that arise out of constructing such a taxonomy. A large number of discourse markers was sampled from the Santa Barbara Corpus of Spoken American English. Two judges substituted each type of these markers for all other markers. This extensive substitution test determined whether hyponymous, hypernymous and synonymous relations existed between the markers from this corpus of dialogs. Evidence is presented for clustering coherence relations into four categories: direction, polarity, acceptance and empathics. language is the act of communication that normally is coordinated between its participants. The speaker or writer of a message needs to coordinate when to say what, what to say to whom, how and why to say it. In writing this is often difficult because the hearer is not simultaneously present in the communicative act 1. In dialog, speakers have the advantage that hearers are present They know whether they have the hearer \" s attention, whom they are talking to, when they can start and stop speaking, and what they can say. Hearers generally give clues on each of these aspects by providing feedback. Coordination between speakers and hearers consists of multifaceted tasks between the parties involved. For instance, speakers need to monitor whether hearers are attending to what is said (is the hearer making eye contact?), who they are talking to (is the hearer an authority?), when they are speaking (is there a pause in the conversation which allows the speaker to start speaking?), what to say (how to express a meaningful information?), and whether the speaker needs to follow up on an earlier piece of information (is there anything that is by convention expected from the speaker based on previous pieces of information). This makes dialog a very dynamic act of coordination. Take for instance the following dialog in the Santa Barbara Corpus for Spoken American English (SBSAE) …",
"title": ""
},
{
"docid": "f3c20a2a0694f54d2d8c9295348fb143",
"text": "Vector-based word representations have made great progress on many Natural Language Processing tasks. However, due to the lack of sentiment information, the traditional word vectors are insufficient to settle sentiment analysis tasks. In order to capture the sentiment information, we extended Continuous Skip-gram model (Skip-gram) and presented two sentiment word embedding models by integrating sentiment information into semantic word representations. Experimental results showed that the sentiment word embeddings learned by two models indeed capture sentiment and semantic information as well. Moreover, the proposed sentiment word embedding models outperform traditional word vectors on both Chinese and English corpora.",
"title": ""
},
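The passage above does not spell out how the sentiment term is attached to the Skip-gram objective, so the following sketch only illustrates one common way to do it: a joint loss that adds a sentiment-prediction head on top of Skip-gram with negative sampling. The class name, embedding size and binary label scheme are illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch: Skip-gram with negative sampling plus a sentiment term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentSkipGram(nn.Module):
    def __init__(self, vocab_size, dim=100, n_sentiments=2):
        super().__init__()
        self.in_emb = nn.Embedding(vocab_size, dim)    # center-word vectors
        self.out_emb = nn.Embedding(vocab_size, dim)   # context-word vectors
        self.sent_head = nn.Linear(dim, n_sentiments)  # sentiment classifier

    def forward(self, center, context, negatives, sent_label):
        v = self.in_emb(center)                                      # (B, dim)
        # Standard Skip-gram with negative sampling.
        pos = torch.sigmoid((v * self.out_emb(context)).sum(-1)).clamp_min(1e-7).log()
        neg = torch.sigmoid(-(v.unsqueeze(1) * self.out_emb(negatives)).sum(-1))
        neg = neg.clamp_min(1e-7).log().sum(-1)
        skipgram_loss = -(pos + neg).mean()
        # Extra term: predict the sentiment polarity of the training sentence.
        sentiment_loss = F.cross_entropy(self.sent_head(v), sent_label)
        return skipgram_loss + sentiment_loss                        # joint objective

model = SentimentSkipGram(vocab_size=5000)
loss = model(center=torch.randint(0, 5000, (8,)),
             context=torch.randint(0, 5000, (8,)),
             negatives=torch.randint(0, 5000, (8, 5)),
             sent_label=torch.randint(0, 2, (8,)))
print(loss.item())
```

At training time each (center, context) pair would carry the sentiment label of the sentence it was drawn from, so the same word vectors are shaped by both the semantic and the sentiment objective.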
{
"docid": "5495ed83b98364af094efa735b391ff1",
"text": "In this review we integrate results of long term experimental study on ant ”language” and intelligence which were fully based on fundamental ideas of Information Theory, such as the Shannon entropy, the Kolmogorov complexity, and the Shannon’s equation connecting the length of a message (l) and its frequency (p), i.e. l = − log p for rational communication systems. This approach, new for studying biological communication systems, enabled us to obtain the following important results on ants’ communication and intelligence: i) to reveal ”distant homing” in ants, that is, their ability to transfer information about remote events; ii) to estimate the rate of information transmission; iii) to reveal that ants are able to grasp regularities and to use them for ”compression” of information; iv) to reveal that ants are able to transfer to each other the information about the number of objects; v) to discover that ants can add and subtract small numbers. The obtained results show that Information Theory is not only wonderful mathematical theory, but many its results may be considered as Nature laws.",
"title": ""
},
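A tiny worked example of the relation l = -log p cited in the review above: under an optimal code, a message used with frequency p should receive a code of length proportional to -log p, and the average of these lengths is the Shannon entropy. The frequencies below are invented for illustration and are not data from the ant experiments.

```python
# Ideal code lengths and Shannon entropy for a made-up message distribution.
import math

message_freqs = {"turn_left": 0.5, "turn_right": 0.25, "go_straight": 0.25}

for msg, p in message_freqs.items():
    print(f"{msg:12s} p={p:.2f}  ideal length = {-math.log2(p):.2f} bits")

entropy = -sum(p * math.log2(p) for p in message_freqs.values())
print(f"Shannon entropy of the message set: {entropy:.2f} bits/message")
```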
{
"docid": "b374975ae9690f96ed750a888713dbc9",
"text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.",
"title": ""
},
{
"docid": "acb41ecca590ed8bc53b7af46a280daf",
"text": "We consider the problem of state estimation for a dynamic system driven by unobserved, correlated inputs. We model these inputs via an uncertain set of temporally correlated dynamic models, where this uncertainty includes the number of modes, their associated statistics, and the rate of mode transitions. The dynamic system is formulated via two interacting graphs: a hidden Markov model (HMM) and a linear-Gaussian state space model. The HMM's state space indexes system modes, while its outputs are the unobserved inputs to the linear dynamical system. This Markovian structure accounts for temporal persistence of input regimes, but avoids rigid assumptions about their detailed dynamics. Via a hierarchical Dirichlet process (HDP) prior, the complexity of our infinite state space robustly adapts to new observations. We present a learning algorithm and computational results that demonstrate the utility of the HDP for tracking, and show that it efficiently learns typical dynamics from noisy data.",
"title": ""
},
{
"docid": "a428e9ce70da1a0be87912367809980d",
"text": "This paper presents a study on bow tie antenna with U-shape slot, designated for the Wireless applications. It is designed to work on a thick substrate (h = 1.6mm) with a high dielectric constant ( = 4.4) operating at three frequencies (2.4, 4 and 5GHz). Two U-shaped slots are introduced in the both sides of bow tie antenna in order to achieve a very low return loss at these frequencies. The antenna is simulated with HFSSv11 simulator. The proposed antenna can exhibit minimum return loss, Omni-directional radiation pattern, wide impedance bandwidth, VSWR<2, and the stronger current distribution throughout the antenna.",
"title": ""
},
{
"docid": "b863ab617b5c800fe570f579b2b12b11",
"text": "Bourdieu and Education: How Useful is Bourdieu's Theory for Researchers?",
"title": ""
},
{
"docid": "8a28f3ad78a77922fd500b805139de4b",
"text": "Sina Weibo is the most popular and fast growing microblogging social network in China. However, more and more spam messages are also emerging on Sina Weibo. How to detect these spam is essential for the social network security. While most previous studies attempt to detect the microblogging spam by identifying spammers, in this paper, we want to exam whether we can detect the spam by each single Weibo message, because we notice that more and more spam Weibos are posted by normal users or even popular verified users. We propose a Weibo spam detection method based on machine learning algorithm. In addition, different from most existing microblogging spam detection methods which are based on English microblogs, our method is designed to deal with the features of Chinese microblogs. Our extensive empirical study shows the effectiveness of our approach.",
"title": ""
},
{
"docid": "369cb3790a031d167deb5eb41e74e3ab",
"text": "Utterance classification is a critical pre-processing step for many speech understanding and dialog systems. In multi-user settings, one needs to first identify if an utterance is even directed at the system, followed by another level of classification to determine the intent of the user’s input. In this work, we propose RNN and LSTM models for both these tasks. We show how both models outperform baselines based on ngram-based language models (LMs), feedforward neural network LMs, and boosting classifiers. To deal with the high rate of singleton and out-of-vocabulary words in the data, we also investigate a word input encoding based on character ngrams, and show how this representation beats the standard one-hot vector word encoding. Overall, these proposed approaches achieve over 30% relative reduction in equal error rate compared to boosting classifier baseline on an ATIS utterance intent classification task, and over 3.9% absolute reduction in equal error rate compared to a the maximum entropy LM baseline of 27.0% on an addressee detection task. We find that RNNs work best when utterances are short, while LSTMs are best when utterances are longer.",
"title": ""
},
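The character-ngram word encoding mentioned in the passage above can be sketched as follows: each word is decomposed into its character n-grams, which are hashed into a fixed-size binary vector, so that singleton and out-of-vocabulary words still receive informative representations. The n-gram orders and bucket count are illustrative choices, not the values used in the paper.

```python
# Hashed character n-gram encoding of a single word (toy parameters).
import numpy as np

def char_ngram_encoding(word, n_values=(2, 3), n_buckets=1024):
    padded = f"<{word}>"                      # mark word boundaries
    vec = np.zeros(n_buckets, dtype=np.float32)
    for n in n_values:
        for i in range(len(padded) - n + 1):
            ngram = padded[i:i + n]
            # Python's hash() is fine for a demo; a stable hash would be used in practice.
            vec[hash(ngram) % n_buckets] = 1.0
    return vec

# Out-of-vocabulary words still get a meaningful, non-empty representation.
print(char_ngram_encoding("rebook").sum(), char_ngram_encoding("xyzzy").sum())
```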
{
"docid": "8d5759855079e2ddaab2e920b93ca2a3",
"text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.",
"title": ""
},
{
"docid": "14fe7deaece11b3d4cd4701199a18599",
"text": "\"Natively unfolded\" proteins occupy a unique niche within the protein kingdom in that they lack ordered structure under conditions of neutral pH in vitro. Analysis of amino acid sequences, based on the normalized net charge and mean hydrophobicity, has been applied to two sets of proteins: small globular folded proteins and \"natively unfolded\" ones. The results show that \"natively unfolded\" proteins are specifically localized within a unique region of charge-hydrophobicity phase space and indicate that a combination of low overall hydrophobicity and large net charge represent a unique structural feature of \"natively unfolded\" proteins.",
"title": ""
},
{
"docid": "9a1d8c06cedb5c876515679088f55ab5",
"text": "A 5-axis hybrid computer numerical controlled machine was developed using a 2 degree of freedom spherical parallel mechanism known as the Agile Eye. The hybrid machine design consisted of a 3-axis serial gantry type structure, with the Agile Eye being placed at the end of the Z-Axis to allow for machining on inclined planes. A control system was designed that controlled two kinematic systems, the 3-axis serial kinematic system and the 2-axis parallel kinematic system. This paper details the design of the agile eye, its kinematic models and the integration of the agile eye mechanism to create a functional hybrid machine.",
"title": ""
},
{
"docid": "931a719037feac7a3addcdcf08312db3",
"text": "Automatic detection and recognition of road signs is an important component of automated driver assistance systems contributing to the safety of the drivers, pedestrians and vehicles. Despite significant research, the problem of detecting and recognizing road signs still remains challenging due to varying lighting conditions, complex backgrounds and different viewing angles. We present an effective and efficient method for detection and recognition of traffic signs from images. Detection is carried out by performing color based segmentation followed by application of Hough transform to find circles, triangles or rectangles. Recognition is carried out using three state-of-the-art feature matching techniques, SIFT, SURF and BRISK. The proposed system evaluated on a custom developed dataset reported promising detection and recognition results. A comparative analysis of the three descriptors reveal that while SIFT achieves the best recognition rates, BRISK is the most efficient of the three descriptors in terms of computation time.",
"title": ""
},
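The detection stage described in the passage above (color segmentation followed by a Hough transform for circles) can be illustrated with OpenCV as below. The HSV thresholds and Hough parameters are placeholders that would need tuning; they are not taken from the paper.

```python
# Rough sketch: red color segmentation in HSV, then Hough circles on the mask.
import cv2
import numpy as np

def detect_red_circular_signs(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so two ranges are combined.
    mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 60), (180, 255, 255))
    mask = cv2.medianBlur(mask, 5)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                               param1=100, param2=20, minRadius=10, maxRadius=120)
    return [] if circles is None else circles[0].tolist()   # [x, y, radius] triples

# Example usage on a synthetic image containing one red disc.
img = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.circle(img, (100, 100), 40, (0, 0, 255), thickness=-1)
print(detect_red_circular_signs(img))
```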
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
}
] |
scidocsrr
|
4e3bac67202b90957932894c971ff95e
|
Towards native code offloading based MCC frameworks for multimedia applications: A survey
|
[
{
"docid": "677dea61996aa5d1461998c09ecc334f",
"text": "Mobile applications are becoming increasingly ubiquitous and provide ever richer functionality on mobile devices, while such applications drain increasingly more battery power of mobile devices. Offloading some parts of the application running on mobile devices onto remote servers/clouds is a promising approach to extend the battery life of mobile devices. However, as data transmission of offloading causes delay and energy costs for mobile devices, it is necessary to carefully design application partitioning/offloading schemes to weigh the benefits against the transmission delay and costs. Due to bandwidth fluctuations in the wireless environment, static partitionings in previous work are unsuitable for mobile platforms with a fixed bandwidth assumption, while dynamic partitionings result in high overhead of continuous partitioning for mobile devices. Therefore, we propose a novel partitioning scheme taking the bandwidth as a variable to improve static partitioning and avoid high costs of dynamic partitioning. Firstly, we construct application Object Relation Graphs (ORGs) by combining static analysis and dynamic profiling to propose partitioning optimization models. Then based on our novel executiontime and energy optimization partitioning models, we propose the Branch-and-Bound based Application Partitioning (BBAP) algorithm and Min-Cut based Greedy Application Partitioning (MCGAP) algorithm. BBAP is suited to finding the optimal partitioning solutions for small applications, while MCGAP is applicable to quickly obtaining suboptimal solutions for large-scale applications. Experimental results demonstrate that both algorithms can adapt to bandwidth fluctuations well, and significantly reduce application execution time and energy consumption by optimally distributing components between mobile devices and servers. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
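The min-cut view of partitioning used by MCGAP in the passage above can be illustrated with a toy graph: each component is connected to a DEVICE terminal and a CLOUD terminal, with edge capacities encoding remote and local execution costs, and inter-component edges encoding transfer costs; a minimum s-t cut then yields an offloading decision. The costs and component names below are invented, and the paper's actual ORG construction and cost models are richer than this sketch.

```python
# Toy offloading decision cast as a minimum s-t cut with networkx.
import networkx as nx

G = nx.DiGraph()
components = ["ui", "parser", "solver", "renderer"]
local_cost  = {"ui": 1, "parser": 4, "solver": 9, "renderer": 3}   # cost of running on the device
remote_cost = {"ui": 9, "parser": 2, "solver": 1, "renderer": 5}   # cost of running in the cloud
comm_cost   = [("ui", "parser", 2), ("parser", "solver", 1), ("solver", "renderer", 3)]

for c in components:
    G.add_edge("DEVICE", c, capacity=remote_cost[c])  # paid if c ends up on the CLOUD side
    G.add_edge(c, "CLOUD", capacity=local_cost[c])    # paid if c stays on the DEVICE side
for u, v, w in comm_cost:
    G.add_edge(u, v, capacity=w)                      # transfer cost paid if u and v are split
    G.add_edge(v, u, capacity=w)

cut_value, (device_side, cloud_side) = nx.minimum_cut(G, "DEVICE", "CLOUD")
print("total cost:", cut_value)
print("keep on device:", device_side - {"DEVICE"})
print("offload to cloud:", cloud_side - {"CLOUD"})
```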
{
"docid": "8869cab615e5182c7c03f074ead081f7",
"text": "This article introduces the principal concepts of multimedia cloud computing and presents a novel framework. We address multimedia cloud computing from multimedia-aware cloud (media cloud) and cloud-aware multimedia (cloud media) perspectives. First, we present a multimedia-aware cloud, which addresses how a cloud can perform distributed multimedia processing and storage and provide quality of service (QoS) provisioning for multimedia services. To achieve a high QoS for multimedia services, we propose a media-edge cloud (MEC) architecture, in which storage, central processing unit (CPU), and graphics processing unit (GPU) clusters are presented at the edge to provide distributed parallel processing and QoS adaptation for various types of devices.",
"title": ""
}
] |
[
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "4ab8913fff86d8a737ed62c56fe2b39d",
"text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "71333997a4f9f38de0b53697d7b7cff1",
"text": "Environmental sustainability of a supply chain depends on the purchasing strategy of the supply chain members. Most of the earlier models have focused on cost, quality, lead time, etc. issues but not given enough importance to carbon emission for supplier evaluation. Recently, there is a growing pressure on supply chain members for reducing the carbon emission of their supply chain. This study presents an integrated approach for selecting the appropriate supplier in the supply chain, addressing the carbon emission issue, using fuzzy-AHP and fuzzy multi-objective linear programming. Fuzzy AHP (FAHP) is applied first for analyzing the weights of the multiple factors. The considered factors are cost, quality rejection percentage, late delivery percentage, green house gas emission and demand. These weights of the multiple factors are used in fuzzy multi-objective linear programming for supplier selection and quota allocation. An illustration with a data set from a realistic situation is presented to demonstrate the effectiveness of the proposed model. The proposed approach can handle realistic situation when there is information vagueness related to inputs. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b803d626421c7e7eaf52635c58523e8f",
"text": "Force-directed algorithms are among the most flexible methods for calculating layouts of simple undirected graphs. Also known as spring embedders, such algorithms calculate the layout of a graph using only information contained within the structure of the graph itself, rather than relying on domain-specific knowledge. Graphs drawn with these algorithms tend to be aesthetically pleasing, exhibit symmetries, and tend to produce crossing-free layouts for planar graphs. In this survey we consider several classical algorithms, starting from Tutte’s 1963 barycentric method, and including recent scalable multiscale methods for large and dynamic graphs.",
"title": ""
},
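A compact sketch of the spring-embedder idea surveyed in the passage above, in the Fruchterman-Reingold style: all vertex pairs repel, connected vertices attract, and a cooling schedule limits how far vertices move per iteration. The constants and cooling schedule are illustrative; production layouts (and the multiscale methods mentioned) are considerably more refined.

```python
# Minimal force-directed (spring-embedder) layout in the Fruchterman-Reingold style.
import numpy as np

def force_directed_layout(edges, n, iters=200, k=0.3, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n, 2))
    for t in range(iters):
        disp = np.zeros_like(pos)
        # Repulsive forces between every pair of vertices: magnitude k^2 / d.
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        disp += (delta / dist[..., None] * (k * k / dist)[..., None]).sum(axis=1)
        # Attractive forces along edges: magnitude d^2 / k.
        for u, v in edges:
            d = pos[u] - pos[v]
            f = (np.linalg.norm(d) / k) * d
            disp[u] -= f
            disp[v] += f
        # Cooling schedule limits how far a vertex may move in this step.
        temp = 0.1 * (1 - t / iters)
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, temp)
    return pos

print(force_directed_layout([(0, 1), (1, 2), (2, 0), (2, 3)], n=4).round(2))
```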
{
"docid": "a50763db7b9c73ab5e29389d779c343d",
"text": "Near to real-time emotion recognition is a promising task for human-computer interaction (HCI) and human-robot interaction (HRI). Using knowledge about the user's emotions depends upon the possibility to extract information about users' emotions during HCI or HRI without explicitly asking users about the feelings they are experiencing. To be able to sense the user's emotions without interrupting the HCI, we present a new method applied to the emotional experience of the user for extracting semantic information from the autonomic nervous system (ANS) signals associated with emotions. We use the concepts of 1st person - where the subject consciously (and subjectively) extracts the semantic meaning of a given lived experience, (e.g. `I felt amused') - and 3rd person approach - where the experimenter interprets the semantic meaning of the subject's experience from a set of externally (and objectively) measured variables (e.g. galvanic skin response measures). Based on the 3rd person approach, our technique aims at psychologically interpreting physiological parameters (skin conductance and heart rate), and at producing a continuous extraction of the user's affective state during HCI or HRI. We also combine it with the 1st person approach measure which allows a tailored interpretation of the physiological measure closely related to the user own emotional experience",
"title": ""
},
{
"docid": "99bd908e217eb9f56c40abd35839e9b3",
"text": "How does the physical structure of an arithmetic expression affect the computational processes engaged in by reasoners? In handwritten arithmetic expressions containing both multiplications and additions, terms that are multiplied are often placed physically closer together than terms that are added. Three experiments evaluate the role such physical factors play in how reasoners construct solutions to simple compound arithmetic expressions (such as \"2 + 3 × 4\"). Two kinds of influence are found: First, reasoners incorporate the physical size of the expression into numerical responses, tending to give larger responses to more widely spaced problems. Second, reasoners use spatial information as a cue to hierarchical expression structure: More narrowly spaced subproblems within an expression tend to be solved first and tend to be multiplied. Although spatial relationships besides order are entirely formally irrelevant to expression semantics, reasoners systematically use these relationships to support their success with various formal properties.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "10fd3a7acae83f698ad04c4d0f011600",
"text": "A continuous-rate digital clock and data recovery (CDR) with automatic frequency acquisition is presented. The proposed automatic frequency acquisition scheme implemented using a conventional bang-bang phase detector (BBPD) requires minimum additional hardware, is immune to input data transition density, and is applicable to subrate CDRs. A ring-oscillator-based two-stage fractional-N phase-locked loop (PLL) is used as a digitally controlled oscillator (DCO) to achieve wide frequency range, low noise, and to decouple the tradeoff between jitter transfer (JTRAN) bandwidth and ring oscillator noise suppression in conventional CDRs. The CDR is implemented using a digital D/PLL architecture to decouple JTRAN bandwidth from jitter tolerance (JTOL) corner frequency, eliminate jitter peaking, and remove JTRAN dependence on BBPD gain. Fabricated in a 65 nm CMOS process, the prototype CDR achieves error-free operation (BER <; 10-12) from 4 to 10.5 Gb/s with pseudorandom binary sequence (PRBS) data sequences ranging from PRBS7 to PRBS31. The proposed automatic frequency acquisition scheme always locks the CDR loop within 1000 ppm residual frequency error in worst case. At 10 Gb/s, the CDR consumes 22.5 mW power and achieves a recovered clock long-term jitter of 2.2 psrms/24.0 pspp with PRBS31 input data. The measured JTRAN bandwidth and JTOL corner frequencies are 0.2 and 9 MHz, respectively.",
"title": ""
},
{
"docid": "509fa5630ed7e3e7bd914fb474da5071",
"text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.",
"title": ""
},
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
},
{
"docid": "7c0677ad61691beecd7f89d5c70f2b5b",
"text": "Bidirectional dc-dc converters (BDC) have recently received a lot of attention due to the increasing need to systems with the capability of bidirectional energy transfer between two dc buses. Apart from traditional application in dc motor drives, new applications of BDC include energy storage in renewable energy systems, fuel cell energy systems, hybrid electric vehicles (HEV) and uninterruptible power supplies (UPS). The fluctuation nature of most renewable energy resources, like wind and solar, makes them unsuitable for standalone operation as the sole source of power. A common solution to overcome this problem is to use an energy storage device besides the renewable energy resource to compensate for these fluctuations and maintain a smooth and continuous power flow to the load. As the most common and economical energy storage devices in medium-power range are batteries and super-capacitors, a dc-dc converter is always required to allow energy exchange between storage device and the rest of system. Such a converter must have bidirectional power flow capability with flexible control in all operating modes. In HEV applications, BDCs are required to link different dc voltage buses and transfer energy between them. For example, a BDC is used to exchange energy between main batteries (200-300V) and the drive motor with 500V dc link. High efficiency, lightweight, compact size and high reliability are some important requirements for the BDC used in such an application. BDCs also have applications in line-interactive UPS which do not use double conversion technology and thus can achieve higher efficiency. In a line-interactive UPS, the UPS output terminals are connected to the grid and therefore energy can be fed back to the inverter dc bus and charge the batteries via a BDC during normal mode. In backup mode, the battery feeds the inverter dc bus again via BDC but in reverse power flow direction. BDCs can be classified into non-isolated and isolated types. Non-isolated BDCs (NBDC) are simpler than isolated BDCs (IBDC) and can achieve better efficiency. However, galvanic isolation is required in many applications and mandated by different standards. The",
"title": ""
},
{
"docid": "1f752034b5307c0118d4156d0b95eab3",
"text": "Importance\nTherapy-related myeloid neoplasms are a potentially life-threatening consequence of treatment for autoimmune disease (AID) and an emerging clinical phenomenon.\n\n\nObjective\nTo query the association of cytotoxic, anti-inflammatory, and immunomodulating agents to treat patients with AID with the risk for developing myeloid neoplasm.\n\n\nDesign, Setting, and Participants\nThis retrospective case-control study and medical record review included 40 011 patients with an International Classification of Diseases, Ninth Revision, coded diagnosis of primary AID who were seen at 2 centers from January 1, 2004, to December 31, 2014; of these, 311 patients had a concomitant coded diagnosis of myelodysplastic syndrome (MDS) or acute myeloid leukemia (AML). Eighty-six cases met strict inclusion criteria. A case-control match was performed at a 2:1 ratio.\n\n\nMain Outcomes and Measures\nOdds ratio (OR) assessment for AID-directed therapies.\n\n\nResults\nAmong the 86 patients who met inclusion criteria (49 men [57%]; 37 women [43%]; mean [SD] age, 72.3 [15.6] years), 55 (64.0%) had MDS, 21 (24.4%) had de novo AML, and 10 (11.6%) had AML and a history of MDS. Rheumatoid arthritis (23 [26.7%]), psoriasis (18 [20.9%]), and systemic lupus erythematosus (12 [14.0%]) were the most common autoimmune profiles. Median time from onset of AID to diagnosis of myeloid neoplasm was 8 (interquartile range, 4-15) years. A total of 57 of 86 cases (66.3%) received a cytotoxic or an immunomodulating agent. In the comparison group of 172 controls (98 men [57.0%]; 74 women [43.0%]; mean [SD] age, 72.7 [13.8] years), 105 (61.0%) received either agent (P = .50). Azathioprine sodium use was observed more frequently in cases (odds ratio [OR], 7.05; 95% CI, 2.35- 21.13; P < .001). Notable but insignificant case cohort use among cytotoxic agents was found for exposure to cyclophosphamide (OR, 3.58; 95% CI, 0.91-14.11) followed by mitoxantrone hydrochloride (OR, 2.73; 95% CI, 0.23-33.0). Methotrexate sodium (OR, 0.60; 95% CI, 0.29-1.22), mercaptopurine (OR, 0.62; 95% CI, 0.15-2.53), and mycophenolate mofetil hydrochloride (OR, 0.66; 95% CI, 0.21-2.03) had favorable ORs that were not statistically significant. No significant association between a specific length of time of exposure to an agent and the drug's category was observed.\n\n\nConclusions and Relevance\nIn a large population with primary AID, azathioprine exposure was associated with a 7-fold risk for myeloid neoplasm. The control and case cohorts had similar systemic exposures by agent category. No association was found for anti-tumor necrosis factor agents. Finally, no timeline was found for the association of drug exposure with the incidence in development of myeloid neoplasm.",
"title": ""
},
{
"docid": "c451d86c6986fab1a1c4cd81e87e6952",
"text": "Large-scale is a trend in person re-identi- fication (re-id). It is important that real-time search be performed in a large gallery. While previous methods mostly focus on discriminative learning, this paper makes the attempt in integrating deep learning and hashing into one framework to evaluate the efficiency and accuracy for large-scale person re-id. We integrate spatial information for discriminative visual representation by partitioning the pedestrian image into horizontal parts. Specifically, Part-based Deep Hashing (PDH) is proposed, in which batches of triplet samples are employed as the input of the deep hashing architecture. Each triplet sample contains two pedestrian images (or parts) with the same identity and one pedestrian image (or part) of the different identity. A triplet loss function is employed with a constraint that the Hamming distance of pedestrian images (or parts) with the same identity is smaller than ones with the different identity. In the experiment, we show that the proposed PDH method yields very competitive re-id accuracy on the large-scale Market-1501 and Market-1501+500K datasets.",
"title": ""
},
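The triplet formulation described in the passage above can be sketched as follows: features of an image (or of a horizontal part) are mapped to relaxed binary codes, and the loss enforces that the code distance of a same-identity pair is smaller than that of a different-identity pair by a margin. The backbone features, code length and margin are placeholders rather than the paper's settings, and squared distance on tanh outputs stands in for the Hamming distance used at test time.

```python
# Sketch of a hashing head trained with a triplet loss (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    def __init__(self, in_dim=512, code_bits=128):
        super().__init__()
        self.fc = nn.Linear(in_dim, code_bits)

    def forward(self, feats):
        return torch.tanh(self.fc(feats))   # relaxed codes in (-1, 1); sign() at test time

def triplet_hash_loss(anchor, positive, negative, margin=8.0):
    d_pos = ((anchor - positive) ** 2).sum(dim=1)   # surrogate for Hamming distance
    d_neg = ((anchor - negative) ** 2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

head = HashHead()
feats = torch.randn(3, 4, 512)                      # anchor / positive / negative features
codes = head(feats.view(-1, 512)).view(3, 4, -1)
print(triplet_hash_loss(codes[0], codes[1], codes[2]).item())
```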
{
"docid": "a1018c89d326274e4b71ffc42f4ebba2",
"text": "We describe a method for improving the classification of short text strings using a combination of labeled training data plus a secondary corpus of unlabeled but related longer documents. We show that such unlabeled background knowledge can greatly decrease error rates, particularly if the number of examples or the size of the strings in the training set is small. This is particularly useful when labeling text is a labor-intensive job and when there is a large amount of information available about a particular problem on the World Wide Web. Our approach views the task as one of information integration using WHIRL, a tool that combines database functionalities with techniques from the information-retrieval literature.",
"title": ""
},
{
"docid": "b770124e1e5a7b4161b7f00a9bf3916f",
"text": "In the biomedical domain large amount of text documents are unstructured information is available in digital text form. Text Mining is the method or technique to find for interesting and useful information from unstructured text. Text Mining is also an important task in medical domain. The technique uses for Information retrieval, Information extraction and natural language processing (NLP). Traditional approaches for information retrieval are based on key based similarity. These approaches are used to overcome these problems; Semantic text mining is to discover the hidden information from unstructured text and making relationships of the terms occurring in them. In the biomedical text, the text should be in the form of text which can be present in the books, articles, literature abstracts, and so forth. Most of information is stored in the text format, so in this paper we will focus on the role of ontology for semantic text mining by using WordNet. Specifically, we have presented a model for extracting concepts from text documents using linguistic ontology in the domain of medical.",
"title": ""
},
{
"docid": "e090bb879e35dbabc5b3c77c98cd6832",
"text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.",
"title": ""
},
{
"docid": "585c589cdab52eaa63186a70ac81742d",
"text": "BACKGROUND\nThere has been a rapid increase in the use of technology-based activity trackers to promote behavior change. However, little is known about how individuals use these trackers on a day-to-day basis or how tracker use relates to increasing physical activity.\n\n\nOBJECTIVE\nThe aims were to use minute level data collected from a Fitbit tracker throughout a physical activity intervention to examine patterns of Fitbit use and activity and their relationships with success in the intervention based on ActiGraph-measured moderate to vigorous physical activity (MVPA).\n\n\nMETHODS\nParticipants included 42 female breast cancer survivors randomized to the physical activity intervention arm of a 12-week randomized controlled trial. The Fitbit One was worn daily throughout the 12-week intervention. ActiGraph GT3X+ accelerometer was worn for 7 days at baseline (prerandomization) and end of intervention (week 12). Self-reported frequency of looking at activity data on the Fitbit tracker and app or website was collected at week 12.\n\n\nRESULTS\nAdherence to wearing the Fitbit was high and stable, with a mean of 88.13% of valid days over 12 weeks (SD 14.49%). Greater adherence to wearing the Fitbit was associated with greater increases in ActiGraph-measured MVPA (binteraction=0.35, P<.001). Participants averaged 182.6 minutes/week (SD 143.9) of MVPA on the Fitbit, with significant variation in MVPA over the 12 weeks (F=1.91, P=.04). The majority (68%, 27/40) of participants reported looking at their tracker or looking at the Fitbit app or website once a day or more. Changes in Actigraph-measured MVPA were associated with frequency of looking at one's data on the tracker (b=-1.36, P=.07) but not significantly associated with frequency of looking at one's data on the app or website (P=.36).\n\n\nCONCLUSIONS\nThis is one of the first studies to explore the relationship between use of a commercially available activity tracker and success in a physical activity intervention. A deeper understanding of how individuals engage with technology-based trackers may enable us to more effectively use these types of trackers to promote behavior change.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov NCT02332876; https://clinicaltrials.gov/ct2/show/NCT02332876?term=NCT02332876 &rank=1 (Archived by WebCite at http://www.webcitation.org/6wplEeg8i).",
"title": ""
},
{
"docid": "7e32376722b669d592a4a97fc1d6bf89",
"text": "The main challenge in achieving good image morphs is to create a map that aligns corresponding image elements. Our aim is to help automate this often tedious task. We compute the map by optimizing the compatibility of corresponding warped image neighborhoods using an adaptation of structural similarity. The optimization is regularized by a thin-plate spline and may be guided by a few user-drawn points. We parameterize the map over a halfway domain and show that this representation offers many benefits. The map is able to treat the image pair symmetrically, model simple occlusions continuously, span partially overlapping images, and define extrapolated correspondences. Moreover, it enables direct evaluation of the morph in a pixel shader without mesh rasterization. We improve the morphs by optimizing quadratic motion paths and by seamlessly extending content beyond the image boundaries. We parallelize the algorithm on a GPU to achieve a responsive interface and demonstrate challenging morphs obtained with little effort.",
"title": ""
},
{
"docid": "ef584ca8b3e9a7f8335549927df1dc16",
"text": "Rapid evolution in technology and the internet brought us to the era of online services. E-commerce is nothing but trading goods or services online. Many customers share their good or bad opinions about products or services online nowadays. These opinions become a part of the decision-making process of consumer and make an impact on the business model of the provider. Also, understanding and considering reviews will help to gain the trust of the customer which will help to expand the business. Many users give reviews for the single product. Such thousands of review can be analyzed using big data effectively. The results can be presented in a convenient visual form for the non-technical user. Thus, the primary goal of research work is the classification of customer reviews given for the product in the map-reduce framework.",
"title": ""
},
{
"docid": "1de2d4e5b74461c142e054ffd2e62c2d",
"text": "Table : Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. v Consider a text sequence represented as X, composed of a sequence of words. Let {v#, v$, ...., v%} denote the respective word embeddings for each token, where L is the sentence/document length; v The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;",
"title": ""
}
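The parameter-free compositions contrasted with CNN and LSTM encoders in the passage above can be written in a few lines: the sentence representation z is obtained by element-wise average or max pooling over the word embeddings v_1..v_L (and optionally their concatenation), so the only learned parameters are the embeddings themselves. The vocabulary and embedding dimensions below are arbitrary.

```python
# Simple word-embedding compositions in the SWEM style (toy dimensions).
import numpy as np

rng = np.random.default_rng(0)
embedding = rng.normal(size=(10_000, 300))            # toy vocabulary of word vectors

def swem_aver(token_ids):
    return embedding[token_ids].mean(axis=0)           # element-wise average pooling

def swem_max(token_ids):
    return embedding[token_ids].max(axis=0)             # element-wise max pooling

sentence = [12, 873, 4051, 7]                           # word indices of a length-4 sentence
z = np.concatenate([swem_aver(sentence), swem_max(sentence)])  # concatenated variant
print(z.shape)   # (600,): fixed-length representation regardless of sentence length
```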
] |
scidocsrr
|
6dc25cce5e69a89a3b8e06723b61693b
|
Predictive translation memory: a mixed-initiative system for human language translation
|
[
{
"docid": "90fc941f6db85dd24b47fa06dd0bb0aa",
"text": "Recent debate has centered on the relative promise of focusinguser-interface research on developing new metaphors and tools thatenhance users abilities to directly manipulate objects versusdirecting effort toward developing interface agents that provideautomation. In this paper, we review principles that show promisefor allowing engineers to enhance human-computer interactionthrough an elegant coupling of automated services with directmanipulation. Key ideas will be highlighted in terms of the Lookoutsystem for scheduling and meeting management.",
"title": ""
}
] |
[
{
"docid": "d2d4b51e3d7d0172946140dacad82db8",
"text": "The integration of supply chains offers many benefits; yet, it may also render organisations more vulnerable to electronic fraud (e-fraud). E-fraud can drain on organisations’ financial resources, and can have a significant adverse effect on the ability to achieve their strategic objectives. Therefore, efraud control should be part of corporate board-level due diligence, and should be integrated into organisations’ practices and business plans. Management is responsible for taking into consideration the relevant cultural, strategic and implementation elements that inter-relate with each other and to coordinating the human, technological and financial resources necessary to designing and implementing policies and procedures for controlling e-fraud. Due to the characteristics of integrated supply chains, a move from the traditional vertical approach to a systemic, horizontal-vertical approach is necessary. Although the e-fraud risk cannot be eliminated, risk mitigation policies and processes tailored to an organisation’s particular vulnerabilities can significantly reduce the risk and may even preclude certain classes of frauds. In this paper, a conceptual framework of e-fraud control in an integrated supply chain is proposed. The proposed conceptual framework can help managers and practitioners better understand the issues and plan the activities involved in a systemic, horizontal-vertical approach to e-fraud control in an integrated supply chain, and can be a basis upon which empirical studies can be build.",
"title": ""
},
{
"docid": "0a31ab53b887cf231d7ca1a286763e5f",
"text": "Humans acquire their most basic physical concepts early in development, but continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical theories across multiple timescales and levels of abstraction. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model and human learners on a challenging task of inferring novel physical laws in microworlds given short movies. People are generally able to perform this task and behave in line with model predictions. Yet they also make systematic errors suggestive of how a top-down Bayesian approach to learning might be complemented by a more bottomup feature-based approximate inference scheme, to best explain theory learning at an algorithmic level.",
"title": ""
},
{
"docid": "79cdd24d14816f45b539f31606a3d5ee",
"text": "The huge increase in type 2 diabetes is a burden worldwide. Many marketed compounds do not address relevant aspects of the disease; they may already compensate for defects in insulin secretion and insulin action, but loss of secreting cells (β-cell destruction), hyperglucagonemia, gastric emptying, enzyme activation/inhibition in insulin-sensitive cells, substitution or antagonizing of physiological hormones and pathways, finally leading to secondary complications of diabetes, are not sufficiently addressed. In addition, side effects for established therapies such as hypoglycemias and weight gain have to be diminished. At present, nearly 1000 compounds have been described, and approximately 180 of these are going to be developed (already in clinical studies), some of them directly influencing enzyme activity, influencing pathophysiological pathways, and some using G-protein-coupled receptors. In addition, immunological approaches and antisense strategies are going to be developed. Many compounds are derived from physiological compounds (hormones) aiming at improving their kinetics and selectivity, and others are chemical compounds that were obtained by screening for a newly identified target in the physiological or pathophysiological machinery. In some areas, great progress is observed (e.g., incretin area); in others, no great progress is obvious (e.g., glucokinase activators), and other areas are not recommended for further research. For all scientific areas, conclusions with respect to their impact on diabetes are given. Potential targets for which no chemical compound has yet been identified as a ligand (agonist or antagonist) are also described.",
"title": ""
},
{
"docid": "4acc30bade98c1257ab0a904f3695f3d",
"text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.",
"title": ""
},
{
"docid": "045a56e333b1fe78677b8f4cc4c20ecc",
"text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.",
"title": ""
},
{
"docid": "c798c5c19dddb968f15f7bc7734ac2e4",
"text": "Information extraction relevant to the user queries is the challenging task in the ontology environment due to data varieties such as image, video, and text. The utilization of appropriate semantic entities enables the content-based search on annotated text. Recently, the automatic extraction of textual content in the audio-visual content is an advanced research area in a multimedia (MM) environment. The annotation of the video includes several tags and comments. This paper proposes the Collaborative Tagging (CT) model based on the Block Acquiring Page Segmentation (BAPS) method to retrieve the tag-based information. The information extraction in this model includes the Ontology-Based Information Extraction (OBIE) based on the single ontology utilization. The semantic annotation phase in the proposed work inserts the metadata with limited machine-readable terms. The insertion process is split into two major processes such as database uploading to server and extraction of images/web pages based on the results of semantic phase. Novel weight-based novel clustering algorithms are introduced to extract knowledge from MM contents. The ranking based on the weight value in the semantic annotation phase supports the image/web page retrieval process effectively. The comparative analysis of the proposed BAPS-CT with the existing information retrieval (IR) models regarding the average precision rate, time cost, and storage space rate assures the effectiveness of BAPS-CT in OMIR.",
"title": ""
},
{
"docid": "87835d75704f493639744abbf0119bdb",
"text": "Developers of cloud-scale applications face a difficult decision of which kind of storage to use, summarised by the CAP theorem. Currently the choice is between classical CP databases, which provide strong guarantees but are slow, expensive, and unavailable under partition, and NoSQL-style AP databases, which are fast and available, but too hard to program against. We present an alternative: Cure provides the highest level of guarantees that remains compatible with availability. These guarantees include: causal consistency (no ordering anomalies), atomicity (consistent multi-key updates), and support for high-level data types (developer friendly API) with safe resolution of concurrent updates (guaranteeing convergence). These guarantees minimise the anomalies caused by parallelism and distribution, thus facilitating the development of applications. This paper presents the protocols for highly available transactions, and an experimental evaluation showing that Cure is able to achieve scalability similar to eventually-consistent NoSQL databases, while providing stronger guarantees.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "259972cd20a1f763b07bef4619dc7f70",
"text": "This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based learning of language. The advantage of the language origination itself is taken as a learning platform due to the complexity in Chinese language as compared to other types of languages. Users especially children enjoy more by utilize this learning system because they are able to memories the Chinese Character easily and understand more of the origin of the Chinese character under pleasurable learning environment, compares to traditional approach which children need to rote learning Chinese Character under un-pleasurable environment. Skeletonization is used as the representation of Chinese character and object with an animated pictograph evolution to facilitate the learning of the language. Shortest skeleton path matching technique is employed for fast and accurate matching in our implementation. User is required to either write a word or draw a simple 2D object in the input panel and the matched word and object will be displayed as well as the pictograph evolution to instill learning. The target of computer-based learning system is for pre-school children between 4 to 6 years old to learn Chinese characters in a flexible and entertaining manner besides utilizing visual and mind mapping strategy as learning methodology.",
"title": ""
},
{
"docid": "4161b52b832c0b80d0815b9e80a5dda0",
"text": "Machine Comprehension (MC) is a challenging task in Natural Language Processing field, which aims to guide the machine to comprehend a passage and answer the given question. Many existing approaches on MC task are suffering the inefficiency in some bottlenecks, such as insufficient lexical understanding, complex question-passage interaction, incorrect answer extraction and so on. In this paper, we address these problems from the viewpoint of how humans deal with reading tests in a scientific way. Specifically, we first propose a novel lexical gating mechanism to dynamically combine the words and characters representations. We then guide the machines to read in an interactive way with attention mechanism and memory network. Finally we add a checking layer to refine the answer for insurance. The extensive experiments on two popular datasets SQuAD and TriviaQA show that our method exceeds considerable performance than most stateof-the-art solutions at the time of submission.",
"title": ""
},
{
"docid": "abbafaaf6a93e2a49a692690d4107c9a",
"text": "Virtual teams have become a ubiquitous form of organizing, but the impact of social structures within and between teams on group performance remains understudied. This paper uses the case study of a massively multiplayer online game and server log data from over 10,000 players to examine the connection between group social capital (operationalized through guild network structure measures) and team effectiveness, given a variety of in-game social networks. Three different networks, social, task, and exchange networks, are compared and contrasted while controlling for group size, group age, and player experience. Team effectiveness is maximized at a roughly moderate level of closure across the networks, suggesting that this is the optimal level of the groupâs network density. Guilds with high brokerage, meaning they have diverse connections with other groups, were more effective in achievement-oriented networks. In addition, guilds with central leaders were more effective when they teamed up with other guild leaders.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "8dc3ba4784ea55183e96b466937d050b",
"text": "One of the major problems that clinical neuropsychology has had in memory clinics is to apply ecological, easily administrable and sensitive tests that can make the diagnosis of dementia both precocious and reliable. Often the choice of the best neuropsychological test is hard because of a number of variables that can influence a subject’s performance. In this regard, tests originally devised to investigate cognitive functions in healthy adults are not often appropriate to analyze cognitive performance in old subjects with low education because of their intrinsically complex nature. In the present paper, we present normative values for the Rey–Osterrieth Complex Figure B Test (ROCF-B) a simple test that explores constructional praxis and visuospatial memory. We collected normative data of copy, immediate and delayed recall of the ROCF-B in a group of 346 normal Italian subjects above 40 years. A multiple regression analysis was performed to evaluate the potential effect of age, sex, and education on the three tasks administered to the subjects. Age and education had a significant effect on copying, immediate recall, and delayed recall as well as on the rate of forgetting. Correction grids and equivalent scores with cut-off values relative to each task are available. The availability of normative values can make the ROCF-B a valid instrument to assess non-verbal memory in adults and in the elderly for whom the commonly used ROCF-A is too demanding.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
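The passage above describes a CNN applied to mel-scaled spectrograms for boundary retrieval; the sketch below shows one plausible shape such a model could take in PyTorch. The layer sizes, excerpt dimensions, and output head are assumptions, not the paper's exact architecture.

```python
# Sketch (assumed architecture): a small CNN mapping a mel-spectrogram excerpt around
# a candidate time point to a boundary probability.
import torch
import torch.nn as nn

class BoundaryCNN(nn.Module):
    def __init__(self, n_mels=80, n_frames=116):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(8, 6)), nn.ReLU(), nn.MaxPool2d((3, 6)),
            nn.Conv2d(16, 32, kernel_size=(6, 3)), nn.ReLU(), nn.MaxPool2d((3, 3)),
        )
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_mels, n_frames)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_flat, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),   # probability of a boundary at the centre frame
        )

    def forward(self, x):                      # x: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x))

model = BoundaryCNN()
excerpt = torch.randn(4, 1, 80, 116)           # four random spectrogram excerpts
print(model(excerpt).shape)                    # torch.Size([4, 1])
```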
{
"docid": "5dec0745ee631ec4ffbed6402093e35b",
"text": "BACKGROUND\nAdolescent breast hypertrophy can have long-term negative medical and psychological impacts. In select patients, breast reduction surgery is the best treatment. Unfortunately, many in the general and medical communities hold certain misconceptions regarding the indications and timing of this procedure. Several etiologies of adolescent breast hypertrophy, including juvenile gigantomastia, adolescent macromastia, and obesity-related breast hypertrophy, complicate the issue. It is our hope that this paper will clarify these misconceptions through a combined retrospective and literature review.\n\n\nMETHODS\nA retrospective review was conducted looking at adolescent females (≤18 years old) who had undergone bilateral breast reduction surgery. Their preoperative comorbidities, BMI, reduction volume, postoperative complications, and subjective satisfaction were recorded. In addition, a literature review was completed.\n\n\nRESULTS\n34 patients underwent bilateral breast reduction surgery. The average BMI was 29.5 kg/m(2). The average volume resected during bilateral breast reductions was 1820.9 g. Postoperative complications include dehiscence (9%), infection (3%), and poor scarring (6%). There were no cases of recurrence or need for repeat operation. Self-reported patient satisfaction was 97%. All patients described significant improvements in self body-image and participation in social activities. The literature review yielded 25 relevant reported articles, 24 of which are case studies.\n\n\nCONCLUSION\nReduction mammaplasty is safe and effective. It is the preferred treatment method for breast hypertrophy in the adolescent female and may be the only way to alleviate the increased social, psychological, and physical strain caused by this condition.",
"title": ""
},
{
"docid": "353fae3edb830aa86db682f28f64fd90",
"text": "The penetration of renewable resources in power system has been increasing in recent years. Many of these resources are uncontrollable and variable in nature, wind in particular, are relatively unpredictable. At high penetration levels, volatility of wind power production could cause problems for power system to maintain system security and reliability. One of the solutions being proposed to improve reliability and performance of the system is to integrate energy storage devices into the network. In this paper, unit commitment and dispatch schedule in power system with and without energy storage is examined for different level of wind penetration. Battery energy storage (BES) is considered as an alternative solution to store energy. The SCUC formulation and solution technique with wind power and BES is presented. The proposed formulation and model is validated with eight-bus system case study. Further, a discussion on the role of BES on locational pricing, economic, peak load shaving, and transmission congestion management had been made.",
"title": ""
},
{
"docid": "260e574e9108e05b98df7e4ed489e5fc",
"text": "Why are we not living yet with robots? If robots are not common everyday objects, it is maybe because we have looked for robotic applications without considering with sufficient attention what could be the experience of interacting with a robot. This article introduces the idea of a value profile, a notion intended to capture the general evolution of our experience with different kinds of objects. After discussing value profiles of commonly used objects, it offers a rapid outline of the challenging issues that must be investigated concerning immediate, short-term and long-term experience with robots. Beyond science-fiction classical archetypes, the picture emerging from this analysis is the one of versatile everyday robots, autonomously developing in interaction with humans, communicating with one another, changing shape and body in order to be adapted to their various context of use. To become everyday objects, robots will not necessary have to be useful, but they will have to be at the origins of radically new forms of experiences.",
"title": ""
},
{
"docid": "60ff841b0b13442c2afd5dd73178145a",
"text": "Detecting inferences in documents is critical for ensuring privacy when sharing information. In this paper, we propose a refined and practical model of inference detection using a reference corpus. Our model is inspired by association rule mining: inferences are based on word co-occurrences. Using the model and taking the Web as the reference corpus, we can find inferences and measure their strength through web-mining algorithms that leverage search engines such as Google or Yahoo!.\n Our model also includes the important case of private corpora, to model inference detection in enterprise settings in which there is a large private document repository. We find inferences in private corpora by using analogues of our Web-mining algorithms, relying on an index for the corpus rather than a Web search engine.\n We present results from two experiments. The first experiment demonstrates the performance of our techniques in identifying all the keywords that allow for inference of a particular topic (e.g. \"HIV\") with confidence above a certain threshold. The second experiment uses the public Enron e-mail dataset. We postulate a sensitive topic and use the Enron corpus and the Web together to find inferences for the topic.\n These experiments demonstrate that our techniques are practical, and that our model of inference based on word co-occurrence is well-suited to efficient inference detection.",
"title": ""
},
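The inference model described above is based on word co-occurrence in the spirit of association rules; the sketch below illustrates one simple confidence-style strength score. The paper derives counts from a web search engine or a private corpus index, whereas this example uses a tiny in-memory corpus with made-up documents.

```python
# Sketch: association-rule-style inference strength between a keyword and a sensitive
# topic, estimated from co-occurrence counts over a toy in-memory corpus.
corpus = [
    "patient started antiretroviral therapy for hiv",
    "antiretroviral drugs are discussed at the clinic",
    "quarterly budget meeting notes",
    "hiv testing and antiretroviral treatment guidelines",
]

def count(pred):
    return sum(1 for doc in corpus if pred(doc))

def inference_strength(keyword, topic):
    """Confidence that a document mentioning `keyword` also concerns `topic`."""
    n_keyword = count(lambda d: keyword in d)
    n_both = count(lambda d: keyword in d and topic in d)
    return n_both / n_keyword if n_keyword else 0.0

# Keywords whose presence lets a reader infer the sensitive topic "hiv"
for kw in ["antiretroviral", "budget"]:
    print(kw, inference_strength(kw, "hiv"))
```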
{
"docid": "f82a49434548e1aa09792877d84b296c",
"text": "Rats and mice have a tendency to interact more with a novel object than with a familiar object. This tendency has been used by behavioral pharmacologists and neuroscientists to study learning and memory. A popular protocol for such research is the object-recognition task. Animals are first placed in an apparatus and allowed to explore an object. After a prescribed interval, the animal is returned to the apparatus, which now contains the familiar object and a novel object. Object recognition is distinguished by more time spent interacting with the novel object. Although the exact processes that underlie this 'recognition memory' requires further elucidation, this method has been used to study mutant mice, aging deficits, early developmental influences, nootropic manipulations, teratological drug exposure and novelty seeking.",
"title": ""
},
{
"docid": "b42cd71b23c933f7b07d270edc1ce53b",
"text": "We propose a modification of the cost function of the Hopfield model whose salient features shine in its Taylor expansion and result in more than pairwise interactions with alternate signs, suggesting a unified framework for handling both with deep learning and network pruning. In our analysis, we heavily rely on the Hamilton-Jacobi correspondence relating the statistical model with a mechanical system. In this picture, our model is nothing but the relativistic extension of the original Hopfield model (whose cost function is a quadratic form in the Mattis magnetization which mimics the non-relativistic Hamiltonian for a free particle). We focus on the low-storage regime and solve the model analytically by taking advantage of the mechanical analogy, thus obtaining a complete characterization of the free energy and the associated self-consistency equations in the thermodynamic limit. On the numerical side, we test the performances of our proposal with MC simulations, showing that the stability of spurious states (limiting the capabilities of the standard Hebbian construction) is sensibly reduced due to presence of unlearning contributions in this extended framework.",
"title": ""
}
] |
scidocsrr
|
864571bb992259be037a73252faea145
|
BreakingNews: Article Annotation by Image and Text Processing
|
[
{
"docid": "10365680ff0a5da9b97727bf40432aae",
"text": "In this paper, we investigate the contextualization of news documents with geographic and visual information. We propose a matrix factorization approach to analyze the location relevance for each news document. We also propose a method to enrich the document with a set of web images. For location relevance analysis, we first perform toponym extraction and expansion to obtain a toponym list from news documents. We then propose a matrix factorization method to estimate the location-document relevance scores while simultaneously capturing the correlation of locations and documents. For image enrichment, we propose a method to generate multiple queries from each news document for image search and then employ an intelligent fusion approach to collect a set of images from the search results. Based on the location relevance analysis and image enrichment, we introduce a news browsing system named NewsMap which can support users in reading news via browsing a map and retrieving news with location queries. The news documents with the corresponding enriched images are presented to help users quickly get information. Extensive experiments demonstrate the effectiveness of our approaches.",
"title": ""
}
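A minimal sketch of the matrix-factorization idea described above: a partially observed location-document relevance matrix is completed by two low-rank factors fitted with gradient descent. The matrix, rank, and learning rate below are illustrative assumptions rather than the paper's actual formulation.

```python
# Sketch (simplified): low-rank factorization of a location-document relevance matrix.
# R holds initial relevance scores from toponym extraction (0 = unknown); the learned
# factors fill in scores while capturing correlations among locations and documents.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.0, 0.8],        # rows: locations, columns: news documents
              [0.0, 0.9, 0.0],
              [0.7, 0.0, 0.0]])
mask = R > 0                           # only observed entries contribute to the loss
k, lr, lam = 2, 0.05, 0.01
L = rng.normal(scale=0.1, size=(R.shape[0], k))   # location factors
D = rng.normal(scale=0.1, size=(R.shape[1], k))   # document factors

for _ in range(2000):
    E = mask * (R - L @ D.T)           # residual on observed entries
    L += lr * (E @ D - lam * L)
    D += lr * (E.T @ L - lam * D)

print(np.round(L @ D.T, 2))            # completed location-document relevance scores
```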
] |
[
{
"docid": "14868b01ec5f7f6d4005331e592f756d",
"text": "The proposed next-generation air traffic control system depends crucially on a surveillance technology called ADS-B. By 2020, nearly all aircraft flying through U.S. airspace must carry ADS-B transponders to continuously transmit their precise real-time location and velocity to ground-based air traffic control and to other en route aircraft. Surprisingly, the ADS-B protocol has no built-in security mechanisms, which renders ADS-B systems vulnerable to a wide range of malicious attacks. Herein, we address the question “can cryptography secure ADS-B?”— in other words, is there a practical and effective cryptographic solution that can be retrofit to the existing ADS-B system and enhance the security of this critical aviation technology?",
"title": ""
},
{
"docid": "80d920f1f886b81e167d33d5059b8afe",
"text": "Agriculture is one of the most important aspects of human civilization. The usages of information and communication technologies (ICT) have significantly contributed in the area in last two decades. Internet of things (IOT) is a technology, where real life physical objects (e.g. sensor nodes) can work collaboratively to create an information based and technology driven system to maximize the benefits (e.g. improved agricultural production) with minimized risks (e.g. environmental impact). Implementation of IOT based solutions, at each phase of the area, could be a game changer for whole agricultural landscape, i.e. from seeding to selling and beyond. This article presents a technical review of IOT based application scenarios for agriculture sector. The article presents a brief introduction to IOT, IOT framework for agricultural applications and discusses various agriculture specific application scenarios, e.g. farming resource optimization, decision support system, environment monitoring and control systems. The article concludes with the future research directions in this area.",
"title": ""
},
{
"docid": "689c2bac45b0933994337bd28ce0515d",
"text": "Jealousy is a powerful emotional force in couples' relationships. In just seconds it can turn love into rage and tenderness into acts of control, intimidation, and even suicide or murder. Yet it has been surprisingly neglected in the couples therapy field. In this paper we define jealousy broadly as a hub of contradictory feelings, thoughts, beliefs, actions, and reactions, and consider how it can range from a normative predicament to extreme obsessive manifestations. We ground jealousy in couples' basic relational tasks and utilize the construct of the vulnerability cycle to describe processes of derailment. We offer guidelines on how to contain the couple's escalation, disarm their ineffective strategies and power struggles, identify underlying vulnerabilities and yearnings, and distinguish meanings that belong to the present from those that belong to the past, or to other contexts. The goal is to facilitate relational and personal changes that can yield a better fit between the partners' expectations.",
"title": ""
},
{
"docid": "e07756fb1ae9046c3b8c29b85a00bf0f",
"text": "We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. In addition, the output clusters have the property that their spatial locations are bound to the ones of the basins of attraction of the peaks of the density.\n The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input.",
"title": ""
},
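The sketch below illustrates the two phases described above in a heavily simplified form: graph-based hill-climbing to density modes followed by persistence-guided merging. The density values, neighbourhood graph, and threshold tau are toy inputs, and the code is not the paper's implementation.

```python
# Simplified sketch: hill-climbing to density modes on a neighbourhood graph, then
# merging of clusters whose peak prominence (persistence) at a saddle is below tau.
density = [0.2, 0.5, 0.9, 0.4, 0.85, 0.3]                  # estimated density at each point
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
tau = 0.1                                                   # persistence threshold

order = sorted(range(len(density)), key=lambda i: -density[i])
parent = {}                                                 # union-find root = cluster's peak
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i in order:                                             # process points by decreasing density
    higher = [j for j in neighbors[i] if density[j] > density[i]]
    if not higher:
        parent[i] = i                                       # local maximum: start a new cluster
        continue
    parent[i] = find(max(higher, key=lambda j: density[j])) # climb to steepest higher neighbour
    for j in higher:                                        # i may be a saddle between clusters
        r1, r2 = find(i), find(j)
        if r1 != r2:
            low, high = sorted((r1, r2), key=lambda r: density[r])
            if density[low] - density[i] < tau:             # low-prominence peak: merge it
                parent[low] = high

print([find(i) for i in range(len(density))])               # cluster label (peak index) per point
```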
{
"docid": "58d629b3ac6bd731cd45126ce3ed8494",
"text": "The Support Vector Machine (SVM) is a common machine learning tool that is widely used because of its high classification accuracy. Implementing SVM for embedded real-time applications is very challenging because of the intensive computations required. This increases the attractiveness of implementing SVM on hardware platforms for reaching high performance computing with low cost and power consumption. This paper provides the first comprehensive survey of current literature (2010-2015) of different hardware implementations of SVM classifier on Field-Programmable Gate Array (FPGA). A classification of existing techniques is presented, along with a critical analysis and discussion. A challenging trade-off between meeting embedded real-time systems constraints and high classification accuracy has been observed. Finally, some key future research directions are suggested.",
"title": ""
},
{
"docid": "101af2d0539fa1470e8acfcf7c728891",
"text": "OnlineEnsembleLearning",
"title": ""
},
{
"docid": "38aeacd5d85523b494010debd69f4bac",
"text": "We propose to train trading systems by optimizing financial objective functions via reinforcement learning. The performance functions that we consider as value functions are profit or wealth, the Sharpe ratio and our recently proposed ifferential Sharpe ratio for online learning. In Moody & Wu (1997), we presented empirical results in controlled experiments that demonstrated the advantages of reinforcement learning relative to supervised learning. Here we extend our previous work to compare Q-Learning to a reinforcement learning technique based on real-time recurrent learning (RTRL) that maximizes immediate reward. Our simulation results include a spectacular demonstration of the presence of predictability in the monthly Standard and Poors 500 stock index for the 25 year period 1970 through 1994. Our reinforcement trader achieves a simulated out-of-sample profit of over 4000% for this period, compared to the return for a buy and hold strategy of about 1300% (with dividends reinvested). This superior result is achieved with substantially lower isk.",
"title": ""
},
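A sketch of the differential Sharpe ratio mentioned above, following the usual online formulation with exponential moving estimates of the first and second moments of returns; the decay rate, initial values, and return series are illustrative assumptions.

```python
# Sketch of the differential Sharpe ratio as an online reward: moving estimates A, B of
# the first and second moments of returns are updated each step, and D_t measures how
# the current return changes the Sharpe ratio. Values are illustrative, not from the paper.
def differential_sharpe(returns, eta=0.01, a0=0.0, b0=1e-4):
    A, B = a0, b0
    for r in returns:
        dA, dB = r - A, r * r - B
        denom = (B - A * A) ** 1.5
        D = (B * dA - 0.5 * A * dB) / denom if denom > 0 else 0.0
        yield D                      # reward the trading system receives at this step
        A, B = A + eta * dA, B + eta * dB

sample_returns = [0.01, -0.004, 0.012, 0.003, -0.002]
for t, d in enumerate(differential_sharpe(sample_returns)):
    print(f"t={t}  D_t={d:+.4f}")
```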
{
"docid": "feb34f36aed8e030f93c0adfbe49ee8b",
"text": "Complex queries containing outer joins are, for the most part, executed by commercial DBMS products in an \"as written\" manner. Only a very few reorderings of the operations are considered and the benefits of considering comprehensive reordering schemes are not exploited. This is largely due to the fact there are no readily usable results for reordering such operations for relations with duplicates and/or outer join predicates that are other than \"simple.\" Most previous approaches have ignored duplicates and complex predicates; the very few that have considered these aspects have suggested approaches that lead to a possibly exponential number of, and redundant intermediate joins. Since traditional query graph models are inadequate for modeling outer join queries with complex predicates, we present the needed hypergraph abstraction and algorithms for reordering such queries with joins and outer joins. As a result, the query optimizer can explore a significantly larger space of execution plans, and choose one with a low cost. Further, these algorithms are easily incorporated into well known and widely used enumeration methods such as dynamic programming.",
"title": ""
},
{
"docid": "97e358d68b3593efd2e0ae553bbe96a5",
"text": "Malware authors evade the signature based detection by packing the original malware using custom packers. In this paper, we present a static heuristics based approach for the detection of packed executables. We present 1) the PE heuristics considered for analysis and taxonomy of heuristics; 2) a method for computing the score using power distance based on weights and risks assigned to the defined heuristics; and 3) classification of packed executable based on the threshold obtained with the training data set, and the results achieved with the test data set. The experimental results show that our approach has a high detection rate of 99.82% with a low false positive rate of 2.22%. We also bring out difficulties in detecting packed DLL, CLR and Debug mode executables via header analysis.",
"title": ""
},
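The scoring step described above is sketched below under one plausible reading: each triggered PE-header heuristic contributes weight times risk to a score compared against a threshold learned from training data. The heuristic names, weights, risks, and threshold are invented for illustration and do not come from the paper.

```python
# Sketch of heuristic scoring along the lines described above; all values are made up.
HEURISTICS = {
    # name: (weight, risk)
    "nonstandard_section_name":    (0.8, 0.90),
    "high_section_entropy":        (1.0, 0.95),
    "few_imported_functions":      (0.7, 0.80),
    "writable_executable_section": (0.9, 0.85),
}
THRESHOLD = 1.5   # would be chosen from a labelled training set

def packer_score(triggered):
    """triggered: set of heuristic names observed in the PE header."""
    return sum(w * r for name, (w, r) in HEURISTICS.items() if name in triggered)

sample = {"high_section_entropy", "few_imported_functions"}
score = packer_score(sample)
print(score, "packed" if score >= THRESHOLD else "not packed")
```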
{
"docid": "b47d53485704f4237e57d220640346a7",
"text": "Features of consciousness difficult to understand in terms of conventional neuroscience have evoked application of quantum theory, which describes the fundamental behavior of matter and energy. In this paper we propose that aspects of quantum theory (e.g. quantum coherence) and of a newly proposed physical phenomenon of quantum wave function \"self-collapse\" (objective reduction: OR Penrose, 1994) are essential for consciousness, and occur in cytoskeletal microtubules and other structures within each of the brain's neurons. The particular characteristics of microtubules suitable for quantum effects include their crystal-like lattice structure, hollow inner core, organization of cell function and capacity for information processing. We envisage that conformational states of microtubule subunits (tubulins) are coupled to internal quantum events, and cooperatively interact (compute) with other tubulins. We further assume that macroscopic coherent superposition of quantum-coupled tubulin conformational states occurs throughout significant brain volumes and provides the global binding essential to consciousness. We equate the emergence of the microtubule quantum coherence with pre-conscious processing which grows (for up to 500 ms) until the mass energy difference among the separated states of tubulins reaches a threshold related to quantum gravity. According to the arguments for OR put forth in Penrose (1994), superpositioned states each have their own space-time geometries. When the degree of coherent mass energy difference leads to sufficient separation of space time geometry, the system must choose and decay (reduce, collapse) to a single universe state. In this way, a transient superposition of slightly differing space-time geometries persists until an abrupt quantum --, classical reduction occurs. Unlike the random, \"subjective reduction\" (SR, or R) of standard quantum theory caused by observation or environmental entanglement, the OR we propose in microtubules is a se(f-collapse and it results in particular patterns of microtubule-tubulin conformational states that regulate neuronal activities including synaptic functions. Possibilities and probabilities for post-reduction tubulin states are influenced by factors including attachments of microtubule-associated proteins (MAPs) acting as \"nodes\" which tune and \"orchestrate\" the quantum oscillations. We thus term the self-tuning OR process in microtubules \"orchestrated objective reduction\" (\"Orch OR\"), and calculate an estimate for the number of tubulins (and neurons) whose coherence for relevant time periods (e.g. 500ms) will elicit Orch OR. In providing a connection among (1) pre-conscious to conscious transition, (2) fundamental space time notions, (3) non-computability, and (4) binding of various (time scale and spatial) reductions into an instantaneous event (\"conscious now\"), we believe Orch OR in brain microtubules is the most specific and plausible model for consciousness yet proposed. * Corresponding author. Tel.: (520) 626-2116. Fax: (520) 626-2689. E-Mail: srh(cv ccit.arizona.edu. 0378-4754/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved SSDI0378-4754(95 ) 0049-6 454 S. Hameroff, R. Penrose/Mathematics and Computers in Simulation 40 (1996) 453 480",
"title": ""
},
{
"docid": "345e6a4f17eeaca196559ed55df3862e",
"text": "Synaptic plasticity, the putative basis of learning and memory formation, manifests in various forms and across different timescales. Here we show that the interaction of Hebbian homosynaptic plasticity with rapid non-Hebbian heterosynaptic plasticity is, when complemented with slower homeostatic changes and consolidation, sufficient for assembly formation and memory recall in a spiking recurrent network model of excitatory and inhibitory neurons. In the model, assemblies were formed during repeated sensory stimulation and characterized by strong recurrent excitatory connections. Even days after formation, and despite ongoing network activity and synaptic plasticity, memories could be recalled through selective delay activity following the brief stimulation of a subset of assembly neurons. Blocking any component of plasticity prevented stable functioning as a memory network. Our modelling results suggest that the diversity of plasticity phenomena in the brain is orchestrated towards achieving common functional goals.",
"title": ""
},
{
"docid": "a30a40f97b688cd59005434bc936e4ef",
"text": "The Semantic Web works on the existing Web which presents the meaning of information as well-defined vocabularies understood by the people. Semantic Search, at the same time, works on improving the accuracy of a search by understanding the intent of the search and providing contextually relevant results. The paper describes a semantic approach towards web search through a PHP application. The goal was to parse through a user’s browsing history and return semantically relevant web pages for the search query provided. The browser used for this purpose was Mozilla Firefox. The user’s history was stored in a MySQL database, which, in turn, was accessed using PHP. The ontology, created from the browsing history, was then parsed for the entered search query and the corresponding results were returned to the user providing a semantically organized and relevant output.",
"title": ""
},
{
"docid": "51b8fe57500d1d74834d1f9faa315790",
"text": "Simulations of smoke are pervasive in the production of visual effects for commercials, movies and games: from cigarette smoke and subtle dust to large-scale clouds of soot and vapor emanating from fires and explosions. In this talk we present a new Eulerian method that targets the simulation of such phenomena on a structured spatially adaptive voxel grid --- thereby achieving an improvement in memory usage and computational performance over regular dense and sparse grids at uniform resolution. Contrary to e.g. Setaluri et al. [2014], we use velocities collocated at voxel corners which allows sharper interpolation for spatially adaptive simulations, is faster for sampling, and promotes ease-of-use in an open procedural environment where technical artists often construct small computational graphs that apply forces, dissipation etc. to the velocities. The collocated method requires special treatment when projecting out the divergent velocity modes to prevent non-physical high frequency oscillations (not addressed by Ferstl et al. [2014]). To this end we explored discretization and filtering methods from computational physics, combining them with a matrix-free adaptive multigrid scheme based on MLAT and FAS [Trottenberg and Schuller 2001]. Finally we contribute a new volumetric quadrature approach to temporally smooth emission which outperforms e.g. Gaussian quadrature at large time steps. We have implemented our method in the cross-platform Autodesk Bifrost procedural environment which facilitates customization by the individual technical artist, and our implementation is in production use at several major studios. We refer the reader to the accompanying video for examples that illustrate our novel workflows for spatially adaptive simulations and the benefits of our approach. We note that several methods for adaptive fluid simulation have been proposed in recent years, e.g. [Ferstl et al. 2014; Setaluri et al. 2014], and we have drawn a lot of inspiration from these. However, to the best of our knowledge we are the first in computer graphics to propose a collocated velocity, spatially adaptive and matrix-free smoke simulation method that explicitly mitigates non-physical divergent modes.",
"title": ""
},
{
"docid": "4a6d48bd0f214a94f2137f424dd401eb",
"text": "During the past decade, scientific research has provided new insight into the development from an acute, localised musculoskeletal disorder towards chronic widespread pain/fibromyalgia (FM). Chronic widespread pain/FM is characterised by sensitisation of central pain pathways. An in-depth review of basic and clinical research was performed to design a theoretical framework for manual therapy in these patients. It is explained that manual therapy might be able to influence the process of chronicity in three different ways. (I) In order to prevent chronicity in (sub)acute musculoskeletal disorders, it seems crucial to limit the time course of afferent stimulation of peripheral nociceptors. (II) In the case of chronic widespread pain and established sensitisation of central pain pathways, relatively minor injuries/trauma at any locations are likely to sustain the process of central sensitisation and should be treated appropriately with manual therapy accounting for the decreased sensory threshold. Inappropriate pain beliefs should be addressed and exercise interventions should account for the process of central sensitisation. (III) However, manual therapists ignoring the processes involved in the development and maintenance of chronic widespread pain/FM may cause more harm then benefit to the patient by triggering or sustaining central sensitisation.",
"title": ""
},
{
"docid": "dac4ee56923c850874f8c6199456a245",
"text": "In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360 ∘ camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.",
"title": ""
},
{
"docid": "4dc20aa2c72a95022ba6cf3b592960a8",
"text": "Relation Classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we come up with a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art.",
"title": ""
},
{
"docid": "b0d9c5716052e9cfe9d61d20e5647c8c",
"text": "We propose Efficient Neural Architecture Search (ENAS), a faster and less expensive approach to automated model design than previous methods. In ENAS, a controller learns to discover neural network architectures by searching for an optimal path within a larger model. The controller is trained with policy gradient to select a path that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected path is trained to minimize the cross entropy loss. On the Penn Treebank dataset, ENAS can discover a novel architecture thats achieves a test perplexity of 57.8, which is state-of-the-art among automatic model design methods on Penn Treebank. On the CIFAR-10 dataset, ENAS can design novel architectures that achieve a test error of 2.89%, close to the 2.65% achieved by standard NAS (Zoph et al., 2017). Most importantly, our experiments show that ENAS is more than 10x faster and 100x less resource-demanding than NAS.",
"title": ""
},
{
"docid": "49f1d3ebaf3bb3e575ac3e40101494d9",
"text": "This paper discusses the current status of research on fraud detection undertaken a.s part of the European Commissionfunded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets. sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour proFdes covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the HeUinger distance[5] between them as an alarm criteria. Fine tuning the system to minimise the number of false alarms poses a significant ask due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples ate requited for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS, Introduction When a mobile originated phone call is made or various inter-call criteria are met he cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean i terrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some xample fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes ignificantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K, it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. 
Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths. 9 From: AAAI Technical Report WS-97-07. Compilation copyright © 1997, AAAI (www.aaai.org). All rights reserved.",
"title": ""
},
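A small sketch of the alarm criterion described above: the Hellinger distance between a short-term and a long-term behaviour profile, each held as a discrete probability distribution. The bins, profile values, and alarm level are illustrative assumptions.

```python
# Sketch: Hellinger distance between short-term and long-term usage profiles.
import math

def hellinger(p, q):
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q)))

# Probability of calls falling into bins such as (local, national, international, premium)
long_term  = [0.70, 0.20, 0.08, 0.02]   # profile built over the long-term past
short_term = [0.25, 0.15, 0.45, 0.15]   # recent behaviour: sudden shift to international calls

distance = hellinger(long_term, short_term)
ALARM_LEVEL = 0.3                        # tuned to keep false alarms acceptable
print(f"Hellinger distance = {distance:.3f}", "-> raise alarm" if distance > ALARM_LEVEL else "")
```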
{
"docid": "8721382dd1674fac3194d015b9c64f94",
"text": "fines excipients as “substances, other than the active drug substance of finished dosage form, which have been appropriately evaluated for safety and are included in a drug delivery system to either aid the processing of the drug delivery system during its manufacture; protect; support; enhance stability, bioavailability, or patient acceptability; assist in product identification; or enhance any other attributes of the overall safety and effectiveness of the drug delivery system during storage or use” (1). This definition implies that excipients serve a purpose in a formulation and contrasts with the old terminology, inactive excipients, which hints at the property of inertness. With a literal interpretation of this definition, an excipient can include diverse molecules or moieties such as replication incompetent viruses (adenoviral or retroviral vectors), bacterial protein components, monoclonal antibodies, bacteriophages, fusion proteins, and molecular chimera. For example, using gene-directed enzyme prodrug therapy, research indicated that chimera containing a transcriptional regulatory DNA sequence capable of being selectively activated in mammalian cells was linked to a sequence that encodes a -lactamase enzyme and delivered to target cells (2). The expressed enzyme in the targeted cells catalyzes the conversion of a subsequently administered prodrug to a toxic agent. A similar purpose is achieved by using an antibody conjugated to an enzyme followed by the administration of a noncytotoxic substance that is converted in vivo by the enzyme to its toxic form (3). In these examples, the chimera or the enzyme-linked antibody would qualify as excipients. Furthermore, many emerging delivery systems use a drug or gene covalently linked to the molecules, polymers, antibody, or chimera responsible for drug targeting, internalization, or transfection. Conventional wisdom dictates that such an entity be classified as the active substance or prodrug for regulatory purposes and be subject to one set of specifications for the entire molecule. The fact remains, however, that only a discrete part of this prodrug is responsible for the therapeutic effect, and a similar effect may be obtained by physically entrapping the drug as opposed to covalent conjugation. The situation is further complicated when fusion proteins are used as a combination of drug and delivery system or when the excipients themselves",
"title": ""
}
] |
scidocsrr
|
c79fad33fdeb2a2a15da27e3f8f904cf
|
V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets
|
[
{
"docid": "e380710014dd33734636f077a59f1b62",
"text": "Since the work of Golgi and Cajal, light microscopy has remained a key tool for neuroscientists to observe cellular properties. Ongoing advances have enabled new experimental capabilities using light to inspect the nervous system across multiple spatial scales, including ultrastructural scales finer than the optical diffraction limit. Other progress permits functional imaging at faster speeds, at greater depths in brain tissue, and over larger tissue volumes than previously possible. Portable, miniaturized fluorescence microscopes now allow brain imaging in freely behaving mice. Complementary progress on animal preparations has enabled imaging in head-restrained behaving animals, as well as time-lapse microscopy studies in the brains of live subjects. Mouse genetic approaches permit mosaic and inducible fluorescence-labeling strategies, whereas intrinsic contrast mechanisms allow in vivo imaging of animals and humans without use of exogenous markers. This review surveys such advances and highlights emerging capabilities of particular interest to neuroscientists.",
"title": ""
}
] |
[
{
"docid": "9c5b908801357f296a16558284a5b3ae",
"text": "People constantly make snap judgments about objects encountered in the environment. Such rapid judgments must be based on the physical properties of the targets, but the nature of these properties is yet unknown. We hypothesized that sharp transitions in contour might convey a sense of threat, and therefore trigger a negative bias. Our results were consistent with this hypothesis. The type of contour a visual object possesses--whether the contour is sharp angled or curved--has a critical influence on people's attitude toward that object.",
"title": ""
},
{
"docid": "5523f345b8509e8636374d14ac0cf9de",
"text": "In this paper we discuss and create a MQTT based Secured home automation system, by using mentioned sensors and using Raspberry pi B+ model as the network gateway, here we have implemented MQTT Protocol for transferring & receiving sensor data and finally getting access to those sensor data, also we have implemented ACL (access control list) to provide encryption method for the data and finally monitoring those data on webpage or any network devices. R-pi has been used as a gateway or the main server in the whole system, which has various sensor connected to it via wired or wireless communication.",
"title": ""
},
{
"docid": "f3860c0ed0803759e44133a0110a60bb",
"text": "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.",
"title": ""
},
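A minimal sketch of the entropy measure mentioned above: Shannon entropy of a user's comment distribution over topics, where higher entropy indicates participation spread across many topics. The comment counts are made up for illustration.

```python
# Sketch: Shannon entropy of a user's comment counts per topic.
import math

def topic_entropy(comment_counts):
    total = sum(comment_counts.values())
    probs = [c / total for c in comment_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

focused_user = {"technology": 48, "politics": 2}
diverse_user = {"technology": 12, "politics": 10, "sports": 14, "science": 14}
print(f"focused: {topic_entropy(focused_user):.2f} bits")
print(f"diverse: {topic_entropy(diverse_user):.2f} bits")
```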
{
"docid": "b206560e0c9f3e59c8b9a8bec6f12462",
"text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.",
"title": ""
},
{
"docid": "c0fc94aca86a6aded8bc14160398ddea",
"text": "THE most persistent problems of recall all concern the ways in which past experiences and past reactions are utilised when anything is remembered. From a general point of view it looks as if the simplest explanation available is to suppose that when any specific event occurs some trace, or some group of traces, is made and stored up in the organism or in the mind. Later, an immediate stimulus re-excites the trace, or group of traces, and, provided a further assumption is made to the effect that the trace somehow carries with it a temporal sign, the re-excitement appears to be equivalent to recall. There is, of course, no direct evidence for such traces, but the assumption at first sight seems to be a very simple one, and so it has commonly been made.",
"title": ""
},
{
"docid": "60513bd4ef2e25915c72674734e3eda2",
"text": "InT. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 397-420). New York: Cambridge University Press,2002. This chapter introduces a theoretical framework that describes the importance of affect in guiding judgments and decisions. As used here,. affect means the specific quality of \"goodness\" or \"badness\" (1) experienced as a feeling state (with or without consciousness) and (2) demarcating a positive or negative quality of a stimulus. Affective responses occur rapidly and automatically note how quickly you sense the feelings as.sociated with the stimulus words treasure or hate. We argue that reliance on such feelings can be characterized as the affect heuristic. In this chapter, we trace the development of the affect heuristic across a variety of research paths followed by ourselves and many others. We also discuss some of the important practical implications resulting from ways that this heuristic impacts our daily lives.",
"title": ""
},
{
"docid": "62d1574e23fcf07befc54838ae2887c1",
"text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.",
"title": ""
},
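The sketch below shows the objective such a metaheuristic maximizes: Otsu's between-class variance for a set of thresholds. The swarm search itself is omitted; a brute-force search over two thresholds on a toy histogram stands in, and the histogram values are assumptions.

```python
# Sketch of the multilevel Otsu objective: between-class variance for a threshold set.
from itertools import combinations

hist = [5, 30, 40, 6, 4, 35, 45, 10]                 # toy grey-level histogram (8 levels)
total = sum(hist)
prob = [h / total for h in hist]
mu_total = sum(i * p for i, p in enumerate(prob))

def between_class_variance(thresholds):
    bounds = [0, *sorted(thresholds), len(prob)]     # class k covers levels [bounds[k], bounds[k+1])
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        w = sum(prob[lo:hi])
        if w == 0:
            continue
        mu = sum(i * prob[i] for i in range(lo, hi)) / w
        var += w * (mu - mu_total) ** 2
    return var

# An optimizer such as EHO would search this space; brute force suffices for two thresholds.
best = max(combinations(range(1, len(prob)), 2), key=between_class_variance)
print("best thresholds:", best, "variance:", round(between_class_variance(best), 3))
```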
{
"docid": "2a34800bc275f062f820c0eb4597d297",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "78454419cd378a8f6d4417e4063835f5",
"text": "We present and evaluate a method for automatically detecting sentence fragments in English texts written by non-native speakers. Our method combines syntactic parse tree patterns and parts-of-speech information produced by a tagger to detect this phenomenon. When evaluated on a corpus of authentic learner texts, our best model achieved a precision of 0.84 and a recall of 0.62, a statistically significant improvement over baselines using non-parse features, as well as a popular grammar checker.",
"title": ""
},
{
"docid": "c79038936fa81d7036d00314d7405e0a",
"text": "This paper proposes a new current controller with modified decoupling and anti-windup schemes for a permanent magnet synchronous motor (PMSM) drive. In designing the controller, an improved voltage model, which is different from existing models in that it reflects all the nonlinear characteristics of the motor, is considered. In an actual PMSM, unintentional distortion occurs in inductance and flux due to magnetic saturation and structural asymmetry. In this paper, the effects of such distortion on voltage ripple are analyzed and the effect of voltage distortion on current control is analyzed in detail. Based on the voltage model, a decoupling controller is developed to effectively separate the d-q current regulators. The controller produces compensation voltages using the current error of the other axis. In addition, an anti-windup controller is designed that takes into account not only the integrator output in PI controllers but also the integrator output in decoupling controllers. The proposed current controller aimed at compensating all nonlinearities of PMSM enables high-performance operation of the motor. The feasibility of the proposed current control scheme is verified by experimental results.",
"title": ""
},
{
"docid": "aec560c27d4873674114bd5dd9d64625",
"text": "Caches consume a significant amount of energy in modern microprocessors. To design an energy-efficient microprocessor, it is important to optimize cache energy consumption. This paper examines performance and power trade-offs in cache designs and the effectiveness of energy reduction for several novel cache design techniques targeted for low power.",
"title": ""
},
{
"docid": "615dbb03f31acfce971a383fa54d7d12",
"text": "Objectives\nTo introduce blockchain technologies, including their benefits, pitfalls, and the latest applications, to the biomedical and health care domains.\n\n\nTarget Audience\nBiomedical and health care informatics researchers who would like to learn about blockchain technologies and their applications in the biomedical/health care domains.\n\n\nScope\nThe covered topics include: (1) introduction to the famous Bitcoin crypto-currency and the underlying blockchain technology; (2) features of blockchain; (3) review of alternative blockchain technologies; (4) emerging nonfinancial distributed ledger technologies and applications; (5) benefits of blockchain for biomedical/health care applications when compared to traditional distributed databases; (6) overview of the latest biomedical/health care applications of blockchain technologies; and (7) discussion of the potential challenges and proposed solutions of adopting blockchain technologies in biomedical/health care domains.",
"title": ""
},
{
"docid": "40b9004b6eb3cdbd8471df38f85d8f12",
"text": "Indoor scene understanding is central to applications such as robot navigation and human companion assistance. Over the last years, data-driven deep neural networks have outperformed many traditional approaches thanks to their representation learning capabilities. One of the bottlenecks in training for better representations is the amount of available per-pixel ground truth data that is required for core scene understanding tasks such as semantic segmentation, normal prediction, and object boundary detection. To address this problem, a number of works proposed using synthetic data. However, a systematic study of how such synthetic data is generated is missing. In this work, we introduce a large-scale synthetic dataset with 500K physically-based rendered images from 45K realistic 3D indoor scenes. We study the effects of rendering methods and scene lighting on training for three computer vision tasks: surface normal prediction, semantic segmentation, and object boundary detection. This study provides insights into the best practices for training with synthetic data (more realistic rendering is worth it) and shows that pretraining with our new synthetic dataset can improve results beyond the current state of the art on all three tasks.",
"title": ""
},
{
"docid": "a08fe0c015f5fc02b7654f3fd00fb599",
"text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.",
"title": ""
},
{
"docid": "f8ba12d3fd6ebf65429a2ce5f5143dbd",
"text": "The contour-guided color palette (CCP) is proposed for robust image segmentation. It efficiently integrates contour and color cues of an image. To find representative colors of an image, color samples along long contours between regions, similar in spirit to machine learning methodology that focus on samples near decision boundaries, are collected followed by the mean-shift (MS) algorithm in the sampled color space to achieve an image-dependent color palette. This color palette provides a preliminary segmentation in the spatial domain, which is further fine-tuned by post-processing techniques such as leakage avoidance, fake boundary removal, and small region mergence. Segmentation performances of CCP and MS are compared and analyzed. While CCP offers an acceptable standalone segmentation result, it can be further integrated into the framework of layered spectral segmentation to produce a more robust segmentation. The superior performance of CCP-based segmentation algorithm is demonstrated by experiments on the Berkeley Segmentation Dataset.",
"title": ""
},
{
"docid": "31201cf1a9fcd93b84c2c402df9003b7",
"text": "Abstract—This paper presents a planar microstrip-fed tab monopole antenna for ultra wideband wireless communications applications. The impedance bandwidth of the antenna is improved by adding slit in one side of the monopole, introducing a tapered transition between the monopole and the feed line, and adding two-step staircase notch in the ground plane. Numerical analysis for the antenna dimensional parameters using Ansoft HFSS is performed and presented. The proposed antenna has a small size of 16 × 19 mm, and provides an ultra wide bandwidth from 2.8 to 28 GHz with low VSWR level and good radiation characteristics to satisfy the requirements of the current and future wireless communications systems.",
"title": ""
},
{
"docid": "3007b72b893b352ae89b519ad54276e9",
"text": "Natural products such as plant extracts and complex microbial secondary metabolites have recently attracted the attention of scientific world for their potential use as drugs for treating chronic diseases such as Type II diabetes. Non-Insulin-Dependent Diabetes Mellitus (NIDDM) or Type II diabetes has complicated basis and has various treatment options, each targeting different mechanism of action. One such option relies on digestive enzyme inhibition. Almost all of the currently used clinically digestive enzyme inhibitors are bacterial secondary metabolites. However in most cases understanding of their complete biosynthetic pathways remains a challenge. The currently used digestive enzyme inhibitors have significant side effects that have restricted their usage. Hence, many active plant metabolites are being investigated as more effective treatment with fewer side effects. Flavonoids, terpenoids, glycosides are few to name in that class. Many of these are proven inhibitors of digestive enzymes but their large scale production remains a technical conundrum. Their successful heterologous production in simple host bacteria in scalable quantities gives a new dimension to the continuously active research for better treatment for type II diabetes. Looking at existing and new methods of mass level production of digestive inhibitors and latest efforts to effectively discover new potential drugs is the subject of this book chapter.",
"title": ""
},
{
"docid": "0c7221ffca357ba80401551333e1080d",
"text": "The effects of temperature and current on the resistance of small geometry silicided contact structures have been characterized and modeled for the first time. Both, temperature and high current induced self heating have been shown to cause contact resistance lowering which can be significant in the performance of advanced ICs. It is demonstrated that contact-resistance sensitivity to temperature and current is controlled by the silicide thickness which influences the interface doping concentration, N. Behavior of W-plug and force-fill (FF) Al plug contacts have been investigated in detail. A simple model has been formulated which directly correlates contact resistance to temperature and N. Furthermore, thermal impedance of these contact structures have been extracted and a critical failure temperature demonstrated that can be used to design robust contact structures.",
"title": ""
},
{
"docid": "0822720d8bb0222bd7f0f758fa93ff9d",
"text": "Hydrogen can be recovered by fermentation of organic material rich in carbohydrates, but much of the organic matter remains in the form of acetate and butyrate. An alternative to methane production from this organic matter is the direct generation of electricity in a microbial fuel cell (MFC). Electricity generation using a single-chambered MFC was examined using acetate or butyrate. Power generated with acetate (800 mg/L) (506 mW/m2 or 12.7 mW/ L) was up to 66% higher than that fed with butyrate (1000 mg/L) (305 mW/m2 or 7.6 mW/L), demonstrating that acetate is a preferred aqueous substrate for electricity generation in MFCs. Power output as a function of substrate concentration was well described by saturation kinetics, although maximum power densities varied with the circuit load. Maximum power densities and half-saturation constants were Pmax ) 661 mW/m2 and Ks ) 141 mg/L for acetate (218 Ω) and Pmax ) 349 mW/m2 and Ks ) 93 mg/L for butyrate (1000 Ω). Similar open circuit potentials were obtained in using acetate (798 mV) or butyrate (795 mV). Current densities measured for stable power output were higher for acetate (2.2 A/m2) than those measured in MFCs using butyrate (0.77 A/m2). Cyclic voltammograms suggested that the main mechanism of power production in these batch tests was by direct transfer of electrons to the electrode by bacteria growing on the electrode and not by bacteria-produced mediators. Coulombic efficiencies and overall energy recovery were 10-31 and 3-7% for acetate and 8-15 and 2-5% for butyrate, indicating substantial electron and energy losses to processes other than electricity generation. These results demonstrate that electricity generation is possible from soluble fermentation end products such as acetate and butyrate, but energy recoveries should be increased to improve the overall process performance.",
"title": ""
},
{
"docid": "4d7cbe7f5e854028277f0120085b8977",
"text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"title": ""
}
] |
scidocsrr
|
04a656662cbf463f1f546af6f4726840
|
A Survey on Neural Network-Based Summarization Methods
|
[
{
"docid": "56826bfc5f48105387fd86cc26b402f1",
"text": "It is difficult to identify sentence importance from a single point of view. In this paper, we propose a learning-based approach to combine various sentence features. They are categorized as surface, content, relevance and event features. Surface features are related to extrinsic aspects of a sentence. Content features measure a sentence based on contentconveying words. Event features represent sentences by events they contained. Relevance features evaluate a sentence from its relatedness with other sentences. Experiments show that the combined features improved summarization performance significantly. Although the evaluation results are encouraging, supervised learning approach requires much labeled data. Therefore we investigate co-training by combining labeled and unlabeled data. Experiments show that this semisupervised learning approach achieves comparable performance to its supervised counterpart and saves about half of the labeling time cost.",
"title": ""
},
{
"docid": "64fc1433249bb7aba59e0a9092aeee5e",
"text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.",
"title": ""
},
{
"docid": "0ac0f9965376f5547a2dabd3d06b6b96",
"text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.",
"title": ""
},
{
"docid": "c0a67a4d169590fa40dfa9d80768ef09",
"text": "Excerpts of technical papers and magazine articles that serve the purposes of conventional abstracts have been created entirely by automatic means. In the exploratory research described, the complete text of an article in machine-readable form i s scanned by a n IBM 704 data-processing machine and analyzed in accordance with a standard program. Statistical information derived from word frequency and distribution is used by the machine to compute a relative measure of significance, first for individual words and then for sentences. Sentences scoring highest in significance are extracted and printed out to become the \" auto-abstract. \" Introduction",
"title": ""
}
] |
[
{
"docid": "1384bc0c18a47630707dfebc036d8ac0",
"text": "Recent research has demonstrated the important of ontology and its applications. For example, while designing adaptive learning materials, designers need to refer to the ontology of a subject domain. Moreover, ontology can show the whole picture and core knowledge about a subject domain. Research from literature also suggested that graphical representation of ontology can reduce the problems of information overload and learning disorientation for learners. However, ontology constructions used to rely on domain experts in the past; it is a time consuming and high cost task. Ontology creation for emerging new domains like e-learning is even more challenging. The aim of this paper is to construct e-learning domain concept maps, an alternative form of ontology, from academic articles. We adopt some relevant journal articles and conferences papers in e-learning domain as data sources, and apply text-mining techniques to automatically construct concept maps for e-learning domain. The constructed concept maps can provide a useful reference for researchers, who are new to e-leaning field, to study related issues, for teachers to design adaptive courses, and for learners to understand the whole picture of e-learning domain knowledge",
"title": ""
},
{
"docid": "2630e22fb604a0657aef4c7d8e56a89f",
"text": "Social media has recently gained tremendous fame as a highly impactful channel of communication in these modern times of digitized living. It has been put on a pedestal across varied streams for facilitating participatory interaction amongst businesses, groups, societies, organizations, consumers, communities, forums, and the like. This subject has received increased attention in the literature with many of its practical applications including social media marketing (SMM) being elaborated, analysed, and recorded by many studies. This study is aimed at collating the existing research on SMM to present a review of seventy one articles that will bring together the many facets of this rapidly blooming media marketing form. The surfacing limitations in the literature on social media have also been identified and potential research directions have been offered.",
"title": ""
},
{
"docid": "5df0273a45b5ae992b421ef5b537c083",
"text": "Other ebooks & PDF you can access on our library: manual for hyster 40 forklift, moto guzzi 1000sp iii 1000 sp 3 sp3 1000sp3 motoguzzi service repair workshop manual, honeywell primus 2015 fms manual, irish wit and wisdom mini books, literacies of power what americans are not allowed to know with new commentary by shirley steinberg joe kincheloe, mexican paper cutting templates, owners manual for 94 e420, the organization and architecture of innovation the organization and architecture of innovation, old huntsman other poems, email to introduce yourself to colleagues, 2007 2009 mdx factory service repair manual, 2007 2009 mdx factory service repair manual, manual mercedes c220, manual mercedes c220, irish wit and wisdom mini books, the organization and architecture of innovation the organization and architecture of innovation, el regal de la comunicacio lb, beginning java programming the object oriented approach, my heart to keep love in xxchange volume 10, sage canes house of grace and favor a town will only rise to the standards of its women five star expressions, bobcat 3400 parts manual, inevitable circumstances, m dchenhimmel gedichte geschichten anke heimberg, labor time guide for auto repair, zomertijd zonnige verhalen, moto guzzi 1000sp iii 1000 sp 3 sp3 1000sp3 motoguzzi service repair workshop manual, draytek smart monitor manual, driving the usa in alphabetical order, driving the usa in alphabetical order, 2013 ford flex owners manual, manual mercedes c220, though mountains fall the daughters of caleb bender volume 3 , literacies of power what americans are not allowed to know with new commentary by shirley steinberg joe kincheloe, link between worlds guide, moto guzzi 1000sp iii 1000 sp 3 sp3 1000sp3 motoguzzi service repair workshop manual, owners manual for 94 e420, the american revolution the american revolution, owners manual for 94 e420, mn drivers license test study guide vietnamese, anastasia and the curse of the romanovs anastasia series ii, honeywell primus 2015 fms manual, 1978 johnson 115 horsepower manual, m2 mei jan 2014 paper, police pdr guide, gender race and national identity nations of flesh and blood routledge research in gender and society, shifters the lions share of her heart bbw lion shape shifter romance paranormal fantasy short stories, manual mercedes c220, police pdr guide, disability leave manual template, civil antisemitism modernism and british culture 1902 1939, and many other in several formats : ebook, PDF, Ms. Word, etc.",
"title": ""
},
{
"docid": "ea47210a071a275d9fdd204d0213d3d8",
"text": "In this paper the role of logic as a formal basis to exploit the query evaluation process of the boolean model and of weighted boolean models is analysed. The proposed approach is based on the expression of the constraint imposed by a query term on a document representation by means of the implication connective (by a fuzzy implication in the case of weighted terms). A logical formula corresponds to a query evaluation structure, and the degree of relevance of a document to a user query is obtained as the truth value of the formula expressing the evaluation structure of the considered query under the interpretation corresponding with a document and the query itself.",
"title": ""
},
{
"docid": "a979b0a02f2ade809c825b256b3c69d8",
"text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.",
"title": ""
},
{
"docid": "331f0702515e1705a5ac02375f1979ac",
"text": "Pavement management systems require detailed information of the current state of the roads to take appropriate actions to optimize expenditure on maintenance and rehabilitation. In particular, the presence of cracks is a cardinal aspect to be considered. This article presents a solution based on an instrumented vehicle equipped with an imaging system, two Inertial Profilers, a Differential Global Positioning System, and a webcam. Information about the state of the road is acquired at normal road speed. A method based on the use of Gabor filters is used to detect the longitudinal and transverse cracks. The methodologies used to create Gabor filter banks and the use of the filtered images as descriptors for subsequent classifiers are discussed in detail. Three different methodologies for setting the threshold of the classifiers are also evaluated. Finally, an AdaBoost algorithm is used for selecting and combining the classifiers, thus improving the results provided by a single classifier. A large database has been acquired and used to train and test the proposed system and methods, and suitable results have been obtained in comparison with other refer-",
"title": ""
},
{
"docid": "8dd6a3cbe9ddb4c50beb83355db5aa5a",
"text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347",
"title": ""
},
{
"docid": "920c977ce3ed5f310c97b6fcd0f5bef4",
"text": "In this paper, different automatic registration schemes base d on different optimization techniques in conjunction with different similarity measures are compared in term s of accuracy and efficiency. Results from every optimizat ion procedure are quantitatively evaluated with respect to t he manual registration, which is the standard registration method used in clinical practice. The comparison has shown automatic regi st ation schemes based on CD consist of an accurate and reliable method that can be used in clinical ophthalmology, as a satisfactory alternative to the manual method. Key-Words: multimodal image registration, optimization algorithms, sim ilarity metrics, retinal images",
"title": ""
},
{
"docid": "ae3897a20c5dc1479b1746287382e677",
"text": "We present a novel approach to parameterize a mesh with disk topology to the plane in a shape-preserving manner. Our key contribution is a local/global algorithm, which combines a local mapping of each 3D triangle to the plane, using transformations taken from a restricted set, with a global \"stitch\" operation of all triangles, involving a sparse linear system. The local transformations can be taken from a variety of families, e.g. similarities or rotations, generating different types of parameterizations. In the first case, the parameterization tries to force each 2D triangle to be an as-similar-as-possible version of its 3D counterpart. This is shown to yield results identical to those of the LSCM algorithm. In the second case, the parameterization tries to force each 2D triangle to be an as-rigid-as-possible version of its 3D counterpart. This approach preserves shape as much as possible. It is simple, effective, and fast, due to pre-factoring of the linear system involved in the global phase. Experimental results show that our approach provides almost isometric parameterizations and obtains more shape-preserving results than other state-of-the-art approaches. We present also a more general \"hybrid\" parameterization model which provides a continuous spectrum of possibilities, controlled by a single parameter. The two cases described above lie at the two ends of the spectrum. We generalize our local/global algorithm to compute these parameterizations. The local phase may also be accelerated by parallelizing the independent computations per triangle.",
"title": ""
},
{
"docid": "c2bd875199c6da6ce0f7c46349c7c937",
"text": "This chapter presents a survey of contemporary NLP research on Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and diversity of their semantic, lexical, and syntactical properties. The chapter begins by considering MWEs definitions, describes some MWEs classes, indicates problems MWEs generate in language applications and their possible solutions, presents methods of MWE encoding in dictionaries and their automatic detection in corpora. The chapter goes into more detail on a particular MWE class called Verb-Noun Constructions (VNCs). Due to their frequency in corpus and unique characteristics, VNCs present a research problem in their own right. Having outlined several approaches to VNC representation in lexicons, the chapter explains the formalism of Lexical Function as a possible VNC representation. Such representation may serve as a tool for VNCs automatic detection in a corpus. The latter is illustrated on Spanish material applying some supervised learning methods commonly used for NLP tasks.",
"title": ""
},
{
"docid": "7e840aa656c74c98ec943d1632cb1332",
"text": "Pixel-based methods offer unique potential for modifying existing interfaces independent of their underlying implementation. Prior work has demonstrated a variety of modifications to existing interfaces, including accessibility enhancements, interface language translation, testing frameworks, and interaction techniques. But pixel-based methods have also been limited in their understanding of the interface and therefore the complexity of modifications they can support. This work examines deeper pixel-level understanding of widgets and the resulting capabilities of pixel-based runtime enhancements. Specifically, we present three new sets of methods: methods for pixel-based modeling of widgets in multiple states, methods for managing the combinatorial complexity that arises in creating a multitude of runtime enhancements, and methods for styling runtime enhancements to preserve consistency with the design of an existing interface. We validate our methods through an implementation of Moscovich et al.'s Sliding Widgets, a novel runtime enhancement that could not have been implemented with prior pixel-based methods.",
"title": ""
},
{
"docid": "25b5775c7f45fac087ff8fed1005f061",
"text": "A vast amount of text data is recorded in the forms of repair verbatim in railway maintenance sectors. Efficient text mining of such maintenance data plays an important role in detecting anomalies and improving fault diagnosis efficiency. However, unstructured verbatim, high-dimensional data, and imbalanced fault class distribution pose challenges for feature selections and fault diagnosis. We propose a bilevel feature extraction-based text mining that integrates features extracted at both syntax and semantic levels with the aim to improve the fault classification performance. We first perform an improved X2 statistics-based feature selection at the syntax level to overcome the learning difficulty caused by an imbalanced data set. Then, we perform a prior latent Dirichlet allocation-based feature selection at the semantic level to reduce the data set into a low-dimensional topic space. Finally, we fuse fault features derived from both syntax and semantic levels via serial fusion. The proposed method uses fault features at different levels and enhances the precision of fault diagnosis for all fault classes, particularly minority ones. Its performance has been validated by using a railway maintenance data set collected from 2008 to 2014 by a railway corporation. It outperforms traditional approaches.",
"title": ""
},
{
"docid": "a0ebe19188abab323122a5effc3c4173",
"text": "In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes.",
"title": ""
},
{
"docid": "f3864982e2e03ce4876a6685d74fb84c",
"text": "The central nervous system (CNS) operates by a fine-tuned balance between excitatory and inhibitory signalling. In this context, the inhibitory neurotransmission may be of particular interest as it has been suggested that such neuronal pathways may constitute 'command pathways' and the principle of 'dis-inhibition' leading ultimately to excitation may play a fundamental role (Roberts, E. (1974). Adv. Neurol., 5: 127-143). The neurotransmitter responsible for this signalling is gamma-aminobutyrate (GABA) which was first discovered in the CNS as a curious amino acid (Roberts, E., Frankel, S. (1950). J. Biol. Chem., 187: 55-63) and later proposed as an inhibitory neurotransmitter (Curtis, D.R., Watkins, J.C. (1960). J. Neurochem., 6: 117-141; Krnjevic, K., Schwartz, S. (1967). Exp. Brain Res., 3: 320-336). The present review will describe aspects of GABAergic neurotransmission related to homeostatic mechanisms such as biosynthesis, metabolism, release and inactivation. Additionally, pharmacological and therapeutic aspects of this will be discussed.",
"title": ""
},
{
"docid": "fa2c86d4c0716580415fce8db324fd04",
"text": "One of the key elements in describing a software development method is the roles that are assigned to the members of the software team. This article describes our experience in assigning roles to students who are involved in the development of software projects, working in Extreme Programming teams. This experience, which is based on 25 such projects, teaches us that a personal role for each teammate increases personal responsibility while maintaining the essence of the software development method. In this paper we discuss ways in which different software development methods address the place of roles in a software development team. We also share our experience in refining role specifications and suggest a way to achieve and measure progress by using the perspective of the different roles.",
"title": ""
},
{
"docid": "5b0eef5eed1645ae3d88bed9b20901b9",
"text": "We present a radically new approach to fully homomorphic encryption (FHE) that dramatically improves performance and bases security on weaker assumptions. A central conceptual contribution in our work is a new way of constructing leveled fully homomorphic encryption schemes (capable of evaluating arbitrary polynomial-size circuits), without Gentry’s bootstrapping procedure. Specifically, we offer a choice of FHE schemes based on the learning with error (LWE) or ring-LWE (RLWE) problems that have 2 security against known attacks. For RLWE, we have: • A leveled FHE scheme that can evaluate L-level arithmetic circuits with Õ(λ · L) per-gate computation – i.e., computation quasi-linear in the security parameter. Security is based on RLWE for an approximation factor exponential in L. This construction does not use the bootstrapping procedure. • A leveled FHE scheme that uses bootstrapping as an optimization, where the per-gate computation (which includes the bootstrapping procedure) is Õ(λ), independent of L. Security is based on the hardness of RLWE for quasi-polynomial factors (as opposed to the sub-exponential factors needed in previous schemes). We obtain similar results for LWE, but with worse performance. We introduce a number of further optimizations to our schemes. As an example, for circuits of large width – e.g., where a constant fraction of levels have width at least λ – we can reduce the per-gate computation of the bootstrapped version to Õ(λ), independent of L, by batching the bootstrapping operation. Previous FHE schemes all required Ω̃(λ) computation per gate. At the core of our construction is a much more effective approach for managing the noise level of lattice-based ciphertexts as homomorphic operations are performed, using some new techniques recently introduced by Brakerski and Vaikuntanathan (FOCS 2011). ∗Sponsored by the Air Force Research Laboratory (AFRL). Disclaimer: This material is based on research sponsored by DARPA under agreement number FA8750-11-C-0096 and FA8750-11-2-0225. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government. Approved for Public Release, Distribution Unlimited. †This material is based on research sponsored by DARPA under Agreement number FA8750-11-2-0225. All disclaimers as above apply.",
"title": ""
},
{
"docid": "74beab63358ece0a7b4568dd40a4aea3",
"text": "We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in general intractable: no algorithm can learn the canonical parameters of a generic pair-wise binary graphical model from the mean parameters in time bounded by a polynomial in the number of variables (unless RP = NP). Indeed, such a result has been believed to be true (see [1]) but no proof was known. Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has an inherent repulsive property, which validates an optimization procedure over the polytope that does not use any knowledge of its structure (as required by the ellipsoid method and others).",
"title": ""
},
{
"docid": "c9878a454c91fec094fce02e1ac49348",
"text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.",
"title": ""
},
{
"docid": "38c32734ecc5d0e1c3bb30f97f9c9798",
"text": "Dengue has emerged as an international public health problem. Reasons for the resurgence of dengue in the tropics and subtropics are complex and include unprecedented urbanization with substandard living conditions, lack of vector control, virus evolution, and international travel. Of all these factors, urbanization has probably had the most impact on the amplification of dengue within a given country, and travel has had the most impact for the spread of dengue from country to country and continent to continent. Epidemics of dengue, their seasonality, and oscillations over time are reflected by the epidemiology of dengue in travelers. Sentinel surveillance of travelers could augment existing national public health surveillance systems.",
"title": ""
}
] |
scidocsrr
|
8c1e5063d774b846e6640cf96c2f012b
|
Measuring the quality of experience of HTTP video streaming
|
[
{
"docid": "220f19bb83b81862277ddf27b1c7d24c",
"text": "Many applications require fast data transfer over high speed and long distance networks. However, standard TCP fails to fully utilize the network capacity in high-speed and long distance networks due to its conservative congestion control (CC) algorithm. Some works have been proposed to improve the connection’s throughput by adopting more aggressive loss-based CC algorithms, which may severely decrease the throughput of regular TCP flows sharing the network path. On the other hand, pure delay-based approaches may not work well if they compete with loss-based flows. In this paper, we propose a novel Compound TCP (CTCP) approach, which is a synergy of delay-based and loss-based approach. More specifically, we add a scalable delay-based component into the standard TCP Reno congestion avoidance algorithm (a.k.a., the loss-based component). The sending rate of CTCP is controlled by both components. This new delay-based component can rapidly increase sending rate when the network path is under utilized, but gracefully retreat in a busy network when a bottleneck queue is built. Augmented with this delay-based component, CTCP provides very good bandwidth scalability and at the same time achieves good TCP-fairness. We conduct extensive packet level simulations and test our CTCP implementation on the Windows platform over a production high-speed network link in the Microsoft intranet. Our simulation and experiments results verify the properties of CTCP.",
"title": ""
}
] |
[
{
"docid": "528e16d5e3c4f5e7edc77d8e5960ba4f",
"text": "Nowadays, a large amount of documents is generated daily. These documents may contain some spelling errors which should be detected and corrected by using a proofreading tool. Therefore, the existence of automatic writing assistance tools such as spell-checkers/correctors could help to improve their quality. Spelling errors could be categorized into five categories. One of them is real-word errors, which are misspelled words that have been wrongly converted into another word in the language. Detection of such errors requires discourse analysis rather than just checking the word in a dictionary. We propose a discourse-aware discriminative model to improve the results of context-sensitive spell-checkers by reranking their resulted n-best list. We augment the proposed reranker into two existing context-sensitive spell-checker systems; one of them is based on statistical machine translation and the other one is based on language model. We choose the keywords of the whole document as contextual features of the model and improve the results of both systems by employing the features in a log-linear reranker system. We evaluated the system on two different languages: English and Persian. The results of the experiments in English language on the Wall street journal test set show improvements of 4.5% and 5.2% in detection and correction recall, respectively, in comparison to the baseline method. The mentioned improvement on recall metric was achieved with comparable precision. We also achieve state-of-the-art performance on the Persian language. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "5422a4e5a82d0636c8069ec58c2753a2",
"text": "In this talk, I will focus on the applications and the latest development of deep learning technologies at Alibaba. More specifically, I will discuss (a) how to handle high dimensional data in deep learning and its application to recommender system, (b) the development of deep learning models for transfer learning and its application to image classification, (c) the development of combinatorial optimization techniques for DNN model compression and its application to large-scale image classification and object detection, and (d) the exploration of deep learning technique for combinatorial optimization and its application to the packing problem in shipping industry. I will conclude my talk with a discussion of new directions for deep learning that are under development at Alibaba.",
"title": ""
},
{
"docid": "840555a134e7606f1f3caa24786c6550",
"text": "Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people’s emotional reaction towards images. To this end, different kinds of hand-tuned features are proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using the state of the art methods including CNNs.",
"title": ""
},
{
"docid": "6a602e4f48c0eb66161bce46d53f0409",
"text": "In this paper, we propose three metrics for detecting botnets through analyzing their behavior. Our social infrastructure (i.e., the Internet) is currently experiencing the danger of bots' malicious activities as the scale of botnets increases. Although it is imperative to detect botnet to help protect computers from attacks, effective metrics for botnet detection have not been adequately researched. In this work we measure enormous amounts of traffic passing through the Asian Internet Interconnection Initiatives (AIII) infrastructure. To validate the effectiveness of our proposed metrics, we analyze measured traffic in three experiments. The experimental results reveal that our metrics are applicable for detecting botnets, but further research is needed to refine their performance",
"title": ""
},
{
"docid": "83f14923970c83a55152464179e6bae9",
"text": "Urine drug screening can detect cases of drug abuse, promote workplace safety, and monitor drugtherapy compliance. Compliance testing is necessary for patients taking controlled drugs. To order and interpret these tests, it is required to know of testing modalities, kinetic of drugs, and different causes of false-positive and false-negative results. Standard immunoassay testing is fast, cheap, and the preferred primarily test for urine drug screening. This method reliably detects commonly drugs of abuse such as opiates, opioids, amphetamine/methamphetamine, cocaine, cannabinoids, phencyclidine, barbiturates, and benzodiazepines. Although immunoassays are sensitive and specific to the presence of drugs/drug metabolites, false negative and positive results may be created in some cases. Unexpected positive test results should be checked with a confirmatory method such as gas chromatography/mass spectrometry. Careful attention to urine collection methods and performing the specimen integrity tests can identify some attempts by patients to produce false-negative test results.",
"title": ""
},
{
"docid": "d44daf0c7f045ef388d8b435a705e0b2",
"text": "Mapping the relationship between gene expression and psychopathology is proving to be among the most promising new frontiers for advancing the understanding, treatment, and prevention of mental disorders. Each cell in the human body contains some 23,688 genes, yet only a tiny fraction of a cell’s genes are active or “expressed” at any given moment. The interactions of biochemical, psychological, and environmental factors influencing gene expression are complex, yet relatively accessible technologies for assessing gene expression have allowed the identification of specific genes implicated in a range of psychiatric disorders, including depression, anxiety, and schizophrenia. Moreover, successful psychotherapeutic interventions have been shown to shift patterns of gene expression. Five areas of biological change in successful psychotherapy that are dependent upon precise shifts in gene expression are identified in this paper. Psychotherapy ameliorates (a) exaggerated limbic system responses to innocuous stimuli, (b) distortions in learning and memory, (c) imbalances between sympathetic and parasympathetic nervous system activity, (d) elevated levels of cortisol and other stress hormones, and (e) impaired immune functioning. The thesis of this paper is that psychotherapies which utilize non-invasive somatic interventions may yield greater precision and power in bringing about therapeutically beneficial shifts in gene expression that control these biological markers. The paper examines the manual stimulation of acupuncture points during psychological exposure as an example of such a somatic intervention. For each of the five areas, a testable proposition is presented to encourage research that compares acupoint protocols with conventional therapies in catalyzing advantageous shifts in gene expression.",
"title": ""
},
{
"docid": "9e1998a0df3258b444212e22d610e72f",
"text": "PRIOR WORK We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real-time by repurposing convolutional neural networks designed originally for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models on the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement. SEGMENTATION NETWORK pooling 5 output probability map convolution network deconvolution network DeconvNet VGG-16 FCN-8s fusion input frame + + pooling 3 pooling 4 + facial performance capture semantical segmentation RGB input [Cao et al. 2014] RGB-D input [Hsieh et al. 2015] FCN [Long et al. 2015] DeconvNet [Noh et al. 2015] RESULTS face data hand data cropping / occlusion negative samples input image labeled segmentation input image with occlusion augmentation",
"title": ""
},
{
"docid": "8dc3bcecacd940036090a08d942596ab",
"text": "Pregnancy-related pelvic girdle pain (PRPGP) has a prevalence of approximately 45% during pregnancy and 20-25% in the early postpartum period. Most women become pain free in the first 12 weeks after delivery, however, 5-7% do not. In a large postpartum study of prevalence for urinary incontinence (UI) [Wilson, P.D., Herbison, P., Glazener, C., McGee, M., MacArthur, C., 2002. Obstetric practice and urinary incontinence 5-7 years after delivery. ICS Proceedings of the Neurourology and Urodynamics, vol. 21(4), pp. 284-300] found that 45% of women experienced UI at 7 years postpartum and that 27% who were initially incontinent in the early postpartum period regained continence, while 31% who were continent became incontinent. It is apparent that for some women, something happens during pregnancy and delivery that impacts the function of the abdominal canister either immediately, or over time. Current evidence suggests that the muscles and fascia of the lumbopelvic region play a significant role in musculoskeletal function as well as continence and respiration. The combined prevalence of lumbopelvic pain, incontinence and breathing disorders is slowly being understood. It is also clear that synergistic function of all trunk muscles is required for loads to be transferred effectively through the lumbopelvic region during multiple tasks of varying load, predictability and perceived threat. Optimal strategies for transferring loads will balance control of movement while maintaining optimal joint axes, maintain sufficient intra-abdominal pressure without compromising the organs (preserve continence, prevent prolapse or herniation) and support efficient respiration. Non-optimal strategies for posture, movement and/or breathing create failed load transfer which can lead to pain, incontinence and/or breathing disorders. Individual or combined impairments in multiple systems including the articular, neural, myofascial and/or visceral can lead to non-optimal strategies during single or multiple tasks. Biomechanical aspects of the myofascial piece of the clinical puzzle as it pertains to the abdominal canister during pregnancy and delivery, in particular trauma to the linea alba and endopelvic fascia and/or the consequence of postpartum non-optimal strategies for load transfer, is the focus of the first two parts of this paper. A possible physiological explanation for fascial changes secondary to altered breathing behaviour during pregnancy is presented in the third part. A case study will be presented at the end of this paper to illustrate the clinical reasoning necessary to discern whether conservative treatment or surgery is necessary for restoration of function of the abdominal canister in a woman with postpartum diastasis rectus abdominis (DRA).",
"title": ""
},
{
"docid": "ff5700d97ad00fcfb908d90b56f6033f",
"text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "86d8b5fd2998557858205a6e6e1ed046",
"text": "Advances in the information and communication technologies have led to the emergence of Internet of Thing (IoT). IoT allows many physical devices to capture transmit data, through the internet, providing more data interoperability methods. Nowadays IoT plays an important role not only in communication, but also in monitoring, recording, storage and display. Hence the latest trend in Healthcare communication method using IoT is adapted. Monitored on a continual basis, aggregated and effectively analyzed-such information can bring about a massive positive transformation in the field of healthcare. Our matter of concern in this project is to focus on the development and implementation of an effective healthcare monitoring system based on IoT. The proposed system monitors the vital health parameters and transmits the data through a wireless communication, which is further transferred to a network via a Wi-Fi module. The data can be accessed anytime promoting the reception of the current status of the patient. In case any abnormal behavior or any vital signs are recognized, the caretaker, as well as the doctors are notified immediately through a message service or an audio signaling device (buzzer). In order to design an efficient remote monitoring system, security plays an important part. Cloud computing and password protected Wi-Fi module handles authentication, privacy and security of patient details by allowing restricted access to the database. Hence the system provides quality healthcare to all. This paper is a review of Healthcare Monitoring system using IoT.",
"title": ""
},
{
"docid": "9188a5da5d00592299b5a5268ed579ac",
"text": "We introduce word vectors for the construction domain. Our vectors were obtained by running word2vec on an 11M-word corpus that we created from scratch by leveraging freely-accessible online sources of construction-related text. We first explore the embedding space and show that our vectors capture meaningful constructionspecific concepts. We then evaluate the performance of our vectors against that of ones trained on a 100B-word corpus (Google News) within the framework of an injury report classification task. Without any parameter tuning, our embeddings give competitive results, and outperform the Google News vectors in many cases. Using a keyword-based compression of the reports also leads to a significant speed-up with only a limited loss in performance. We release our corpus and the data set we created for the classification task as publicly available, in the hope that they will be used by future studies for benchmarking and building on our work.",
"title": ""
},
{
"docid": "e041d7f54e1298d4aa55edbfcbda71ad",
"text": "Charts are common graphic representation for scientific data in technical and business papers. We present a robust system for detecting and recognizing bar charts. The system includes three stages, preprocessing, detection and recognition. The kernel algorithm in detection is newly developed Modified Probabilistic Hough Transform algorithm for parallel lines clusters detection. The main algorithms in recognition are bar pattern reconstruction and text primitives grouping in the Hough space which are also original. The Experiments show the system can also recognize slant bar charts, or even hand-drawn charts.",
"title": ""
},
{
"docid": "1127b964ad114909a2aa8d78eb134a78",
"text": "RFID technology is gaining adoption on an increasin g scale for tracking and monitoring purposes. Wide deployments of RFID devices will soon generate an unprecedented volume of data. Emerging applications require the RFID data to be f ilt red and correlated for complex pattern detection and transf ormed to events that provide meaningful, actionable informat ion to end applications. In this work, we design and develop S ASE, a complex event processing system that performs such dat ainformation transformation over real-time streams. We design a complex event language for specifying application l gic for such transformation, devise new query processing techniq ues to efficiently implement the language, and develop a comp rehensive system that collects, cleans, and processes RFID da ta for delivery of relevant, timely information as well as stor ing necessary data for future querying. We demonstrate an initial prototype of SASE through a real-world retail management scenari o.",
"title": ""
},
{
"docid": "f65c3e60dbf409fa2c6e58046aad1e1c",
"text": "The gut microbiota is essential for the development and regulation of the immune system and the metabolism of the host. Germ-free animals have altered immunity with increased susceptibility to immunologic diseases and show metabolic alterations. Here, we focus on two of the major immune-mediated microbiota-influenced components that signal far beyond their local environment. First, the activation or suppression of the toll-like receptors (TLRs) by microbial signals can dictate the tone of the immune response, and they are implicated in regulation of the energy homeostasis. Second, we discuss the intestinal mucosal surface is an immunologic component that protects the host from pathogenic invasion, is tightly regulated with regard to its permeability and can influence the systemic energy balance. The short chain fatty acids are a group of molecules that can both modulate the intestinal barrier and escape the gut to influence systemic health. As modulators of the immune response, the microbiota-derived signals influence functions of distant organs and can change susceptibility to metabolic diseases.",
"title": ""
},
{
"docid": "051188b0b4a6bdc31a0130a16527ce86",
"text": "Considerations of microalgae as a source offood and biochemicals began in the early 1940's, and in 1952 the first Algae Mass-Culture Symposium was held (Burlew, 1953). Since then, a number of microalgae have been suggested and evaluated for their suitability for commercial exploitation. These include Chlorella, Scenedesmus and Spirulina (e.g., Soeder, 1976; Kawaguchi, 1980; Becker & Venkataraman, 1980) and small commercial operations culturing some of these algae for food are underway in various parts of the world. The extremely halophilic unicellular green alga Dunaliella salina (Chlorophyta, Volvocales) has been proposed as a source of its osmoregulatory solute, glycerol and the pigment f3-carotene (Masyuk, 1968; Aasen, et a11969; Ben-Amotz & A vron, 1980). Much research on the commercial potential of this algae and its products has been undertaken (e.g., Williams, et al. 1978; Chen & Chi, 1981) and trial operations have been established in the USSR (Masyuk, 1968) and in Israel (Ben-Amotz & A vron, 1980). Since 1978, we in Australia have been working also, to examine the feasibility of using large-scale culture of Dunaliella salina as a commercial source",
"title": ""
},
{
"docid": "77c2843058856b8d7a582d3b0349b856",
"text": "In this paper, an S-band dual circular polarized (CP) spherical conformal phased array antenna (SPAA) is designed. It has the ability to scan a beam within the hemisphere coverage. There are 23 elements uniformly arranged on the hemispherical dome. The design process of the SPAA is presented in detail. Three different kinds of antenna elements are compared. The gain of the SPAA is more than 13 dBi and the gain flatness is less than 1 dB within the scanning range. The measured result is consistent well with the simulated one.",
"title": ""
},
{
"docid": "2fc0779078bc5be4ed21f87ead97458c",
"text": "This paper presents for the first time an X-band antenna array with integrated silicon germanium low noise amplifiers (LNA) and 3-bit phase shifters (PS). LNAs and PSs were successfully integrated onto an 8 × 2 lightweight antenna utilizing a multilayer liquid crystal polymer (LCP) feed substrate laminated with a duroid antenna layer. A baseline passive 8×2 antenna is measured along with a SiGe integrated 8×2 receive antenna for comparison of results. The active antenna array weighs only 3.5 ounces and consumes 53 mW of dc power. Successful comparisons of the measured and simulated results verify a working phased array with a return loss better than 10 dB across the frequency band of 9.25 GHz-9.75 GHz. A comparison of radiation patterns for the 8×2 baseline antenna and the 8×2 SiGe integrated antenna show a 25 dB increase in gain (ΔG). The SiGe integrated antenna demonstrated a predictable beam steering capability of ±41°. Combined antenna and receiver performance yielded a merit G/T of -9.1 dB/K and noise figure of 5.6 dB.",
"title": ""
},
{
"docid": "3e94030eb03806d79c5e66aa90408fbb",
"text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.",
"title": ""
},
{
"docid": "32b4d99238f6777399909e35f501a5d3",
"text": "BACKGROUND\nRecent technical developments have focused on the full automation of urinalyses, however the manual microscopic analysis of urine sediment is considered the reference method. The aim of this study was to compare the performances of the LabUMat-UriSed and the H800-FUS100 with manual microscopy, and with each other.\n\n\nMETHODS\nThe urine sediments of 332 urine samples were examined by these two devices (LabUMat-UriSed, H800-FUS100) and manual microscopy.\n\n\nRESULTS\nThe reproducibility of the analyzers, UriSed and Fus100 (4.1-28.5% and 4.7-21.2%, respectively), was better than that with manual microscopy (8.5-33.3%). The UriSed was more sensitive for leukocytes (82%), while the Fus-100 was more sensitive for erythrocyte cell counting (73%). There were moderate correlations between manual microscopy and the two devices, UriSed and Fus100, for erythrocyte (r = 0.496 and 0.498, respectively) and leukocyte (r = 0.597 and 0.599, respectively) cell counting however the correlation between the two devices was much better for erythrocyte (r = 0.643) and for leukocyte (r = 0.767) cell counting.\n\n\nCONCLUSION\nIt can be concluded that these two devices showed similar performances. They were time-saving and standardized techniques, especially for reducing preanalytical errors such as the study time, centrifugation, and specimen volume for sedimentary analysis; however, the automated systems are still inadequate for classifying the cells that are present in pathological urine specimens.",
"title": ""
},
{
"docid": "414bb4a869a900066806fa75edc38bd6",
"text": "For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, complementing or sometimes clashing over conceptions of talent or contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one's talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society. To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated. Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art). In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy, and then offer rebuttals to these arguments. 
In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student's talents and abilities. In fact, high-ability students in the United States are not faring well on international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries regardless of parental education level. In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field however in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the \"gifted\" label? What are the expected outcomes of gifted education? And how should gifted students be educated? In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students). In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges. In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully-whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. 
We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education. In the seventh section, we outline a research agenda for the field. This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent (opportunity and motivation) and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not. Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.",
"title": ""
}
] |
scidocsrr
|
f37802285fe1c5aa36f12e3d75f9a9ce
|
Active sample selection in scalar fields exhibiting non-stationary noise with parametric heteroscedastic Gaussian process regression
|
[
{
"docid": "444e84c8c46c066b0a78ad4a743a9c78",
"text": "This paper presents a novel Gaussian process (GP) approach to regression with input-dependent noise rates. We follow Goldberg et al.'s approach and model the noise variance using a second GP in addition to the GP governing the noise-free output value. In contrast to Goldberg et al., however, we do not use a Markov chain Monte Carlo method to approximate the posterior noise variance but a most likely noise approach. The resulting model is easy to implement and can directly be used in combination with various existing extensions of the standard GPs such as sparse approximations. Extensive experiments on both synthetic and real-world data, including a challenging perception problem in robotics, show the effectiveness of most likely heteroscedastic GP regression.",
"title": ""
},
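The "most likely" heteroscedastic GP idea in the passage above can be sketched in a few lines: fit a standard GP to the data, fit a second GP to the log of the squared residuals as an estimate of the input-dependent noise variance, and refit the mean GP with that per-point noise. The snippet below is a minimal illustration of that loop using scikit-learn; it is not the authors' implementation, and the kernel choices and single EM-style pass are assumptions made only for the example.

```python
# Minimal sketch of "most likely" heteroscedastic GP regression.
# Kernels, jitter values, and the single refit pass are illustrative choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 200).reshape(-1, 1)
noise_sd = 0.05 + 0.3 * (X.ravel() / 10.0)          # input-dependent noise level
y = np.sin(X.ravel()) + rng.normal(0.0, noise_sd)

# 1) Homoscedastic GP for the mean.
gp_mean = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-2)
gp_mean.fit(X, y)

# 2) Second GP models log squared residuals -> per-point noise variance.
residuals2 = (y - gp_mean.predict(X)) ** 2
gp_noise = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=1e-2)
gp_noise.fit(X, np.log(residuals2 + 1e-6))
noise_var = np.exp(gp_noise.predict(X))

# 3) Refit the mean GP with the estimated heteroscedastic noise (one EM-style pass).
gp_hetero = GaussianProcessRegressor(ConstantKernel() * RBF(), alpha=noise_var)
gp_hetero.fit(X, y)
mean, std = gp_hetero.predict(X, return_std=True)   # std now reflects varying noise
```

In an active-sampling setting like the query above, the predicted noise variance can then be folded into the acquisition rule so that new samples are requested where the signal, rather than the noise, is uncertain.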
{
"docid": "528d0d198bb092ece6f824d4e1912bcd",
"text": "Monitoring marine ecosystems is challenging due to the dynamic and unpredictable nature of environmental phenomena. In this work we survey a series of techniques used in information gathering that can be used to increase experts' understanding of marine ecosystems through dynamic monitoring. To achieve this, an underwater glider simulator is constructed, and four different path planning algorithms are investigated: Boustrophendon paths, a gradient based approach, a Level-Sets method, and Sequential Bayesian Optimization. Each planner attempts to maximize the time the glider spends in an area where ocean variables are above a threshold value of interest. To emulate marine ecosystem sensor data, ocean temperatures are used. The planners are simulated 50 times each at random starting times and locations. After validation through simulation, we show that informed decision making improves performance, but more accurate prediction of ocean conditions would be necessary to benefit from long horizon lookahead planning.",
"title": ""
}
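Of the four planners named in the abstract above, the boustrophedon (lawnmower) survey is the simplest baseline: it sweeps the field in parallel transects regardless of the sensed values. A small, generic waypoint generator along those lines is sketched below; the field bounds and track spacing are made-up parameters, not values from the paper.

```python
# Illustrative boustrophedon (lawnmower) waypoint generator for a rectangular field.
# Bounds and spacing are hypothetical; the paper's simulator details differ.
from typing import List, Tuple

def boustrophedon_waypoints(x_min: float, x_max: float,
                            y_min: float, y_max: float,
                            spacing: float) -> List[Tuple[float, float]]:
    waypoints = []
    x = x_min
    going_up = True
    while x <= x_max:
        if going_up:
            waypoints.append((x, y_min))
            waypoints.append((x, y_max))
        else:
            waypoints.append((x, y_max))
            waypoints.append((x, y_min))
        going_up = not going_up
        x += spacing
    return waypoints

# Example: sweep a 1 km x 1 km field with 100 m track spacing.
print(boustrophedon_waypoints(0, 1000, 0, 1000, 100)[:4])
```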
] |
[
{
"docid": "3cfa80815c0e4835e4e081348717459a",
"text": "β-defensins are small cationic peptides, with potent immunoregulatory and antimicrobial activity which are produced constitutively and inducibly by eukaryotic cells. This study profiles the expression of a cluster of 19 novel defensin genes which spans 320 kb on chromosome 13 in Bos taurus. It also assesses the genetic variation in these genes between two divergently selected cattle breeds. Using quantitative real-time PCR (qRT-PCR), all 19 genes in this cluster were shown to be expressed in the male genital tract and 9 in the female genital tract, in a region-specific manner. These genes were sequenced in Norwegian Red (NR) and Holstein-Friesian (HF) cattle for population genetic analysis. Of the 17 novel single nucleotide polymorphisms (SNPs) identified, 7 were non-synonymous, 6 synonymous and 4 outside the protein coding region. Significant frequency differences in SNPs in bovine β-defensins (BBD) 115, 117, 121, and 122 were detected between the two breeds, which was also reflected at the haplotype level (P < 0.05). There was clear segregation of the haplotypes into two blocks on chromosome 13 in both breeds, presumably due to historical recombination. This study documents genetic variation in this β-defensin gene cluster between Norwegian Red and Holstein-Friesian cattle which may result from divergent selection for production and fertility traits in these two breeds. Regional expression in the epididymis and fallopian tube suggests a potential reproductive-immunobiology role for these genes in cattle.",
"title": ""
},
{
"docid": "f81cd7e1cfbfc15992fba9368c1df30b",
"text": "The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify principle of operation of conventional 2× TA's. The mathematical derivations release strength reduction of the current sources of the TA is the simplest way to increase DR. Besides, a new technique is presented to expand the Dynamic Range (DR) of conventional 2× TAs. Proposed technique employs current subtraction in place of changing strength of current sources using conventional gain compensation methods, which results in more stable gain over a wider DR. The TA is simulated using Spectre-rf in TSMC 0.18um COMS technology. DR of the 2× TA is expanded to 300ps only with 9% gain error while it consumes only 28uW from a 1.2V supply voltage.",
"title": ""
},
{
"docid": "969a8e447fb70d22a7cbabe7fc47a9c9",
"text": "A novel multi-level AC six-phase motor drive is developed in this paper. The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.",
"title": ""
},
{
"docid": "d9a9339672121fb6c3baeb51f11bfcd8",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7e4a222322346abc281d72534902d707",
"text": "Humic substances (HS) have been widely recognized as a plant growth promoter mainly by changes on root architecture and growth dynamics, which result in increased root size, branching and/or greater density of root hair with larger surface area. Stimulation of the H+-ATPase activity in cell membrane suggests that modifications brought about by HS are not only restricted to root structure, but are also extended to the major biochemical pathways since the driving force for most nutrient uptake is the electrochemical gradient across the plasma membrane. Changes on root exudation profile, as well as primary and secondary metabolism were also observed, though strongly dependent on environment conditions, type of plant and its ontogeny. Proteomics and genomic approaches with diverse plant species subjected to HS treatment had often shown controversial patterns of protein and gene expression. This is a clear indication that HS effects of plants are complex and involve non-linear, cross-interrelated and dynamic processes that need be treated with an interdisciplinary view. Being the humic associations recalcitrant to microbiological attack, their use as vehicle to introduce beneficial selected microorganisms to crops has been proposed. This represents a perspective for a sort of new biofertilizer designed for a sustainable agriculture, whereby plants treated with HS become more susceptible to interact with bioinoculants, while HS may concomitantly modify the structure/activity of the microbial community in the rhizosphere compartment. An enhanced knowledge of the effects on plants physiology and biochemistry and interaction with rhizosphere and endophytic microbes should lead to achieve increased crop productivity through a better use of HS inputs in Agriculture.",
"title": ""
},
{
"docid": "cefabe1b4193483d258739674b53f773",
"text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.",
"title": ""
},
{
"docid": "1ebdcfe9c477e6a29bfce1ddeea960aa",
"text": "Bitcoin—a cryptocurrency built on blockchain technology—was the first currency not controlled by a single entity.1 Initially known to a few nerds and criminals,2 bitcoin is now involved in hundreds of thousands of transactions daily. Bitcoin has achieved values of more than US$15,000 per coin (at the end of 2017), and this rising value has attracted attention. For some, bitcoin is digital fool’s gold. For others, its underlying blockchain technology heralds the dawn of a new digital era. Both views could be right. The fortunes of cryptocurrencies don’t define blockchain. Indeed, the biggest effects of blockchain might lie beyond bitcoin, cryptocurrencies, or even the economy. Of course, the technical questions about blockchain have not all been answered. We still struggle to overcome the high levels of processing intensity and energy use. These questions will no doubt be confronted over time. If the technology fails, the future of blockchain will be different. In this article, I’ll assume technical challenges will be solved, and although I’ll cover some technical issues, these aren’t the main focus of this paper. In a 2015 article, “The Trust Machine,” it was argued that the biggest effects of blockchain are on trust.1 The article referred to public trust in economic institutions, that is, that such organizations and intermediaries will act as expected. When they don’t, trust deteriorates. Trust in economic institutions hasn’t recovered from the recession of 2008.3 Technology can exacerbate distrust: online trades with distant counterparties can make it hard to settle disputes face to face. Trusted intermediaries can be hard to find, and that’s where blockchain can play a part. Permanent record-keeping that can be sequentially updated but not erased creates visible footprints of all activities conducted on the chain. This reduces the uncertainty of alternative facts or truths, thus creating the “trust machine” The Economist describes. As trust changes, so too does governance.4 Vitalik Buterin of the Ethereum blockchain platform calls blockchain “a magic computer” to which anyone can upload self-executing programs.5 All states of every Beyond Bitcoin: The Rise of Blockchain World",
"title": ""
},
{
"docid": "061c8e8e9d6a360c36158193afee5276",
"text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.",
"title": ""
},
{
"docid": "3e2df9d6ed3cad12fcfda19d62a0b42e",
"text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.",
"title": ""
},
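To make the relation-score idea in the abstract above concrete, the sketch below pairs a query embedding with each class prototype (the mean embedding of that class's few support examples) and scores the pair with a small learned relation module. This is a simplified PyTorch rendition of the general scheme, not the authors' exact architecture; the toy embedding network, layer sizes, and the use of mean pooling over shots are placeholder assumptions.

```python
# Simplified Relation Network scoring for a 5-way, 3-shot episode (PyTorch).
# The embedding net and hidden sizes are placeholders, not the paper's exact ones.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # relation score in [0, 1]
        )

    def forward(self, query_feat, class_feat):
        # Concatenate query and class features, then score the pair.
        return self.net(torch.cat([query_feat, class_feat], dim=-1))

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())  # toy embedding
relation = RelationModule(feat_dim=128)

support = torch.randn(5, 3, 1, 28, 28)   # 5 classes x 3 shots (random stand-in data)
query = torch.randn(1, 1, 28, 28)

class_feats = embed(support.view(15, 1, 28, 28)).view(5, 3, -1).mean(dim=1)  # prototypes
q_feat = embed(query).expand(5, -1)
scores = relation(q_feat, class_feats).squeeze(-1)   # one relation score per class
pred = scores.argmax().item()                         # predicted class for the query
```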
{
"docid": "f0da127d64aa6e9c87d4af704f049d07",
"text": "The introduction of the blue-noise spectra-high-frequency white noise with minimal energy at low frequencies-has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray. The blue-noise model, however, does not directly translate to printing with multiple ink intensities. New multilevel printing and display technologies require the development of corresponding quantization algorithms for continuous tone images, namely multitoning. In order to define an optimal distribution of multitone pixels, this paper develops the theory and design of multitone, blue-noise dithering. Here, arbitrary multitone dot patterns are modeled as a layered superposition of stack-constrained binary patterns. Multitone blue-noise exhibits minimum energy at low frequencies and a staircase-like, ascending, spectral pattern at higher frequencies. The optimum spectral profile is described by a set of principal frequencies and amplitudes whose calculation requires the definition of a spectral coherence structure governing the interaction between patterns of dots of different intensities. Efficient algorithms for the generation of multitone, blue-noise dither patterns are also introduced.",
"title": ""
},
{
"docid": "79b91aae9a2911e48026f857e88149f4",
"text": "Fine-grained visual recognition is challenging because it highly relies on the modeling of various semantic parts and fine-grained feature learning. Bilinear pooling based models have been shown to be effective at fine-grained recognition, while most previous approaches neglect the fact that inter-layer part feature interaction and fine-grained feature learning are mutually correlated and can reinforce each other. In this paper, we present a novel model to address these issues. First, a crosslayer bilinear pooling approach is proposed to capture the inter-layer part feature relations, which results in superior performance compared with other bilinear pooling based approaches. Second, we propose a novel hierarchical bilinear pooling framework to integrate multiple cross-layer bilinear features to enhance their representation capability. Our formulation is intuitive, efficient and achieves state-of-the-art results on the widely used fine-grained recognition datasets.",
"title": ""
},
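The core operation behind the abstract above, cross-layer bilinear pooling, can be written compactly in a factorized form: project feature maps from two different layers into a common space, take their elementwise product at each spatial location, sum-pool over locations, and normalize. The PyTorch fragment below is a rough sketch of that idea with invented channel counts and projection size, not the published model.

```python
# Rough sketch of factorized cross-layer bilinear pooling between two conv layers (PyTorch).
# Channel counts and the projection dimension are invented for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerBilinear(nn.Module):
    def __init__(self, c1: int, c2: int, proj_dim: int = 512):
        super().__init__()
        self.proj1 = nn.Conv2d(c1, proj_dim, kernel_size=1)
        self.proj2 = nn.Conv2d(c2, proj_dim, kernel_size=1)

    def forward(self, feat_a, feat_b):
        # Project both layers to a shared space, interact elementwise, sum over locations.
        z = self.proj1(feat_a) * self.proj2(feat_b)           # (B, proj_dim, H, W)
        z = z.flatten(2).sum(dim=2)                           # (B, proj_dim)
        z = torch.sign(z) * torch.sqrt(torch.abs(z) + 1e-10)  # signed square-root
        return F.normalize(z, dim=1)                          # l2 normalization

pool = CrossLayerBilinear(c1=256, c2=512)
a = torch.randn(2, 256, 14, 14)   # features from one layer
b = torch.randn(2, 512, 14, 14)   # features from a deeper layer
features = pool(a, b)             # (2, 512) descriptor fed to a linear classifier
```

Stacking several such cross-layer descriptors and concatenating them is one simple way to realize the hierarchical integration the abstract mentions.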
{
"docid": "8756ef13409ae696ffaf034c873fdaf6",
"text": "This paper addresses a data-driven prognostics method for the estimation of the Remaining Useful Life (RUL) and the associated confidence value of bearings. The proposed method is based on the utilization of the Wavelet Packet Decomposition (WPD) technique, and the Mixture of Gaussians Hidden Markov Models (MoG-HMM). The method relies on two phases: an off-line phase, and an on-line phase. During the first phase, the raw data provided by the sensors are first processed to extract features in the form of WPD coefficients. The extracted features are then fed to dedicated learning algorithms to estimate the parameters of a corresponding MoG-HMM, which best fits the degradation phenomenon. The generated model is exploited during the second phase to continuously assess the current health state of the physical component, and to estimate its RUL value with the associated confidence. The developed method is tested on benchmark data taken from the “NASA prognostics data repository” related to several experiments of failures on bearings done under different operating conditions. Furthermore, the method is compared to traditional time-feature prognostics and simulation results are given at the end of the paper. The results of the developed prognostics method, particularly the estimation of the RUL, can help improving the availability, reliability, and security while reducing the maintenance costs. Indeed, the RUL and associated confidence value are relevant information which can be used to take appropriate maintenance and exploitation decisions. In practice, this information may help the maintainers to prepare the necessary material and human resources before the occurrence of a failure. Thus, the traditional maintenance policies involving corrective and preventive maintenance can be replaced by condition based maintenance.",
"title": ""
},
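As a rough illustration of the off-line phase described above, the snippet below extracts wavelet-packet energy features from a vibration signal with PyWavelets and fits a mixture-of-Gaussians HMM with hmmlearn. The wavelet, decomposition level, and model sizes are assumptions made for the example; the paper's full RUL estimation logic (health-state tracking and confidence values) is not reproduced here.

```python
# Sketch of the off-line phase: WPD energy features + a mixture-of-Gaussians HMM.
# Wavelet, level, and model sizes are illustrative assumptions.
import numpy as np
import pywt
from hmmlearn.hmm import GMMHMM

def wpd_energy_features(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Energy of each terminal wavelet-packet node (2**level features per window)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(node.data ** 2) for node in nodes])

# Fake run-to-failure recording split into windows (stand-in for real bearing data).
rng = np.random.default_rng(0)
windows = [rng.normal(scale=1.0 + 0.01 * i, size=2048) for i in range(100)]
X = np.vstack([wpd_energy_features(w) for w in windows])

# One MoG-HMM learned per training history; hidden states act as degradation stages.
model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X)
states = model.predict(X)   # inferred degradation stage per window
loglik = model.score(X)     # likelihood used on-line to match new data to the model
```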
{
"docid": "fb5a38c1dbbc7416f9b15ee19be9cc06",
"text": "This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phrases. The developmental applications of these results are also discussed.",
"title": ""
},
{
"docid": "c313f49d5dd8b553b0638696b6d4482a",
"text": "Artificial Bee Colony Algorithm (ABC) is nature-inspired metaheuristic, which imitates the foraging behavior of bees. ABC as a stochastic technique is easy to implement, has fewer control parameters, and could easily be modify and hybridized with other metaheuristic algorithms. Due to its successful implementation, several researchers in the optimization and artificial intelligence domains have adopted it to be the main focus of their research work. Since 2005, several related works have appeared to enhance the performance of the standard ABC in the literature, to meet up with challenges of recent research problems being encountered. Interestingly, ABC has been tailored successfully, to solve a wide variety of discrete and continuous optimization problems. Some other works have modified and hybridized ABC to other algorithms, to further enhance the structure of its framework. In this review paper, we provide a thorough and extensive overview of most research work focusing on the application of ABC, with the expectation that it would serve as a reference material to both old and new, incoming researchers to the field, to support their understanding of current trends and assist their future research prospects and directions. The advantages, applications and drawbacks of the newly developed ABC hybrids are highlighted, critically analyzed and discussed accordingly.",
"title": ""
},
{
"docid": "0659c4f6cd4a6d8ab35dd7dba6c0974e",
"text": "Purpose – The purpose of this paper is to examine an integrated model of factors affecting attitudes toward online shopping in Jordan. The paper introduces an integrated model of the roles of perceived website reputation, relative advantage, perceived website image, and trust that affect attitudes toward online shopping. Design/methodology/approach – A structured and self-administered online survey was employed targeting online shoppers of a reputable online retailer in Jordan; MarkaVIP. A sample of 273 of online shoppers was involved in the online survey. A series of exploratory and confirmatory factor analyses were used to assess the research constructs, unidimensionality, validity, and composite reliability (CR). Structural path model analysis was also used to test the proposed research model and hypotheses. Findings – The empirical findings of this study indicate that perceived website reputation, relative advantage, perceived website image, and trust have directly and indirectly affected consumers’ attitudes toward online shopping. Online consumers’ shopping attitudes are mainly affected by perceived relative advantage and trust. Trust is a product of relative advantage and that the later is a function of perceived website reputation. Relative advantage and perceived website reputation are key predictors of perceived website image. Perceived website image was found to be a direct predictor of trust. Also, the authors found that 26 percent of variation in online shopping attitudes was directly caused by relative advantage, trust, and perceived website image. Research limitations/implications – The research examined online consumers’ attitudes toward one website only therefore the generalizability of the research finding is limited to the local Jordanian website; MarkaVIP. Future research is encouraged to conduct comparative studies between local websites and international ones, e.g., Amazon and e-bay in order to shed lights on consumers’ attitudes toward both websites. The findings are limited to online shoppers in Jordan. A fruitful area of research is to conduct a comparative analysis between online and offline attitudes toward online shopping behavior. Also, replications of the current study’s model in different countries would most likely strengthen and validate its findings. The design of the study is quantitative using an online survey to measure online consumers’ attitudes through a cross-sectional design. Future research is encouraged to use qualitative research design and methodology to provide a deeper understanding of consumers’ attitudes and behaviors toward online and offline shopping in Jordan and elsewhere. Practical implications – The paper supports the importance of perceived website reputation, relative advantage, trust, and perceived web image as keys drivers of attitudes toward online shopping. It further underlines the importance of relative advantage and trust as major contributors to building positive attitudes toward online shopping. In developing countries (e.g. Jordan) where individuals are generally described as risk averse, the level of trust is critical in determining the attitude of individuals toward online shopping. Moreover and given the modest economic situation in Jordan, relative advantage is another significant factor affecting consumers’ attitudes toward online shopping. Indeed, if online shopping would not add a significant value and benefits to consumers, they would have negative attitude toward this technology. 
This is at the heart of marketing theory and relationship marketing practice. Further, relative advantage is a key predictor of both perceived website image and trust.",
"title": ""
},
{
"docid": "96be7a58f4aec960e2ad2273dea26adb",
"text": "Because time series are a ubiquitous and increasingly prevalent type of data, there has been much research effort devoted to time series data mining recently. As with all data mining problems, the key to effective and scalable algorithms is choosing the right representation of the data. Many high level representations of time series have been proposed for data mining. In this work, we introduce a new technique based on a bit level approximation of the data. The representation has several important advantages over existing techniques. One unique advantage is that it allows raw data to be directly compared to the reduced representation, while still guaranteeing lower bounds to Euclidean distance. This fact can be exploited to produce faster exact algorithms for similarly search. In addition, we demonstrate that our new representation allows time series clustering to scale to much larger datasets.",
"title": ""
},
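The bit-level idea in the abstract above is easiest to see with a clipped representation: each point of a series is reduced to one bit saying whether it lies above the series mean. The sketch below builds that representation and one simple bound on the Euclidean distance between a raw query and a clipped series; this particular bound illustrates the general principle (never overestimating the true distance, so no false dismissals) and is not necessarily the exact bound proposed in the paper.

```python
# Clipped (1-bit) time series representation plus a simple Euclidean lower bound.
# The bound shown is illustrative; the paper's exact bounding function may differ.
import numpy as np

def clip_series(x: np.ndarray):
    """Return the bit vector (x > mean) and the mean used for clipping."""
    mu = float(x.mean())
    return (x > mu), mu

def lower_bound_distance(query: np.ndarray, bits: np.ndarray, mu: float) -> float:
    """Lower bound on the Euclidean distance between a raw query and a clipped series.

    Where a clipped value is known to lie above mu, its distance to a query point below mu
    is at least (mu - q); symmetrically for values below mu. Otherwise it contributes 0.
    """
    above = np.maximum(mu - query, 0.0)   # used where the bit says "above mean"
    below = np.maximum(query - mu, 0.0)   # used where the bit says "below mean"
    contrib = np.where(bits, above, below)
    return float(np.sqrt(np.sum(contrib ** 2)))

rng = np.random.default_rng(1)
candidate = np.cumsum(rng.normal(size=256))
query = np.cumsum(rng.normal(size=256))

bits, mu = clip_series(candidate)
lb = lower_bound_distance(query, bits, mu)
true = float(np.linalg.norm(candidate - query))
assert lb <= true + 1e-9   # the bound never overestimates the true distance
```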
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "cab97e23b7aa291709ecf18e29f580cf",
"text": "Recent findings show that coding genes are not the only targets that miRNAs interact with. In fact, there is a pool of different RNAs competing with each other to attract miRNAs for interactions, thus acting as competing endogenous RNAs (ceRNAs). The ceRNAs indirectly regulate each other via the titration mechanism, i.e. the increasing concentration of a ceRNA will decrease the number of miRNAs that are available for interacting with other targets. The cross-talks between ceRNAs, i.e. their interactions mediated by miRNAs, have been identified as the drivers in many disease conditions, including cancers. In recent years, some computational methods have emerged for identifying ceRNA-ceRNA interactions. However, there remain great challenges and opportunities for developing computational methods to provide new insights into ceRNA regulatory mechanisms.In this paper, we review the publically available databases of ceRNA-ceRNA interactions and the computational methods for identifying ceRNA-ceRNA interactions (also known as miRNA sponge interactions). We also conduct a comparison study of the methods with a breast cancer dataset. Our aim is to provide a current snapshot of the advances of the computational methods in identifying miRNA sponge interactions and to discuss the remaining challenges.",
"title": ""
},
{
"docid": "748926afd2efcae529a58fbfa3996884",
"text": "The purpose of this research was to investigate preservice teachers’ perceptions about using m-phones and laptops in education as mobile learning tools. A total of 1087 preservice teachers participated in the study. The results indicated that preservice teachers perceived laptops potentially stronger than m-phones as m-learning tools. In terms of limitations the situation was balanced for laptops and m-phones. Generally, the attitudes towards using laptops in education were not exceedingly positive but significantly more positive than m-phones. It was also found that such variables as program/department, grade, gender and possessing a laptop are neutral in causing a practically significant difference in preservice teachers’ views. The results imply an urgent need to grow awareness among participating student teachers towards the concept of m-learning, especially m-learning through m-phones. Introduction The world is becoming a mobigital virtual space where people can learn and teach digitally anywhere and anytime. Today, when timely access to information is vital, mobile devices such as cellular phones, smartphones, mp3 and mp4 players, iPods, digital cameras, data-travelers, personal digital assistance devices (PDAs), netbooks, laptops, tablets, iPads, e-readers such as the Kindle, Nook, etc have spread very rapidly and become common (El-Hussein & Cronje, 2010; Franklin, 2011; Kalinic, Arsovski, Stefanovic, Arsovski & Rankovic, 2011). Mobile devices are especially very popular among young population (Kalinic et al, 2011), particularly among university students (Cheon, Lee, Crooks & Song, 2012; Park, Nam & Cha, 2012). Thus, the idea of learning through mobile devices has gradually become a trend in the field of digital learning (Jeng, Wu, Huang, Tan & Yang, 2010). This is because learning with mobile devices promises “new opportunities and could improve the learning process” (Kalinic et al, 2011, p. 1345) and learning with mobile devices can help achieving educational goals if used through appropriate learning strategies (Jeng et al, 2010). As a matter of fact, from a technological point of view, mobile devices are getting more capable of performing all of the functions necessary in learning design (El-Hussein & Cronje, 2010). This and similar ideas have brought about the concept of mobile learning or m-learning. British Journal of Educational Technology Vol 45 No 4 2014 606–618 doi:10.1111/bjet.12064 © 2013 British Educational Research Association Although mobile learning applications are at their early days, there inevitably emerges a natural pressure by students on educators to integrate m-learning (Franklin, 2011) and so a great deal of attention has been drawn in these applications in the USA, Europe and Asia (Wang & Shen, 2012). Several universities including University of Glasgow, University of Sussex and University of Regensburg have been trying to explore and include the concept of m-learning in their learning systems (Kalinic et al, 2011). Yet, the success of m-learning integration requires some degree of awareness and positive attitudes by students towards m-learning. In this respect, in-service or preservice teachers’ perceptions about m-learning become more of an issue, since their attitudes are decisive in successful integration of m-learning (Cheon et al, 2012). Then it becomes critical whether the teachers, in-service or preservice, have favorable perceptions and attitudinal representations regarding m-learning. 
Practitioner Notes
What is already known about this topic
• Mobile devices are very popular among the young population, especially among university students.
• Though it has a recent history, m-learning (ie, learning through mobile devices) has gradually become a trend.
• M-learning brings new opportunities and can improve the learning process. Previous research on m-learning mostly presents positive outcomes in general, besides some drawbacks.
• The success of integrating m-learning in teaching practice requires some degree of awareness and positive attitudes by students towards m-learning.
What this paper adds
• Since teachers' attitudes are decisive in successful integration of m-learning in teaching, the present paper attempts to understand whether preservice teachers have favorable perceptions and attitudes regarding m-learning.
• Unlike much of the previous research on m-learning, which handles perceptions about m-learning in a general sense, the present paper takes a more specific approach to distinguish and compare the perceptions about the two most common m-learning tools: m-phones and laptops.
• It also attempts to find out the variables that cause differences in preservice teachers' perceptions about using these m-learning devices.
Implications for practice and/or policy
• Results imply an urgent need to grow awareness and further positive attitudes among participating student teachers towards m-learning, especially through m-phones.
• Some action should be taken by the faculty and administration to pedagogically inform and raise awareness about m-learning among preservice teachers.
Theoretical framework
M-learning
M-learning has a recent history. When developed as the next phase of e-learning in the early 2000s (Peng, Su, Chou & Tsai, 2009), its potential for education could not be envisaged (Attewell, 2005). However, recent developments in mobile and wireless technologies facilitated the departure from traditional learning models with time and space constraints, replacing them with models embedded into our everyday environment, and the paradigm of mobile learning emerged (Vavoula & Karagiannidis, 2005). Today it spreads rapidly and promises to be one of the efficient ways of education (El-Hussein & Cronje, 2010). Partly because it is a new concept, there is no common definition of m-learning in the literature yet (Peng et al, 2009). A good deal of literature defines m-learning as a derivation or extension of e-learning, which is performed using mobile devices such as PDAs, mobile phones, laptops, etc (Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Riad & El-Ghareeb, 2008). Other definitions highlight certain characteristics of m-learning including portability through mobile devices, wireless Internet connection and ubiquity. For example, a common definition of m-learning in scholarly literature is “the use of portable devices with Internet connection capability in education contexts” (Kinash, Brand & Mathew, 2012, p. 639). In a similar vein, Park et al (2012, p. 592) defines m-learning as “any educational provision where the sole or dominant technologies are handheld or palmtop devices.” On the other hand, m-learning is likely to be simply defined stressing its property of ubiquity, referring to its ability to happen whenever and wherever needed (Peng et al, 2009). For example, Franklin (2011, p. 261) defines mobile learning as “learning that happens anywhere, anytime.” Though it is rather a new research topic and the effectiveness of m-learning in terms of learning achievements has not been fully investigated (Park et al, 2012), there is already an agreement that m-learning brings new opportunities and can improve the learning process (Kalinic et al, 2011). Moreover, the literature review by Wu et al (2012) notes that 86% of the 164 mobile learning studies present positive outcomes in general. Several perspectives of m-learning are attributed in the literature in association with these positive outcomes. The most outstanding among them is the feature of mobility. M-learning makes sense as an educational activity because the technology and its users are mobile (El-Hussein & Cronje, 2010). Hence, learning outside the classroom walls is possible (Nordin, Embi & Yunus, 2010; Şad, 2008; Saran, Seferoğlu & Çağıltay, 2009), enabling students to become active participants rather than passive receivers of knowledge (Looi et al, 2010). This unique feature of m-learning brings about the possibility of learning not only anywhere, without the limits of classroom or library, but also anytime (Çavuş & İbrahim, 2009; Hwang & Chang, 2011; Jeng et al, 2010; Kalinic et al, 2011; Motiwalla, 2007; Sha, Looi, Chen & Zhang, 2012; Sølvberg & Rismark, 2012). This especially offers learners a certain amount of “freedom and independence” (El-Hussein & Cronje, 2010, p. 19), as well as the motivation and ability to “self-regulate their own learning” (Sha et al, 2012, p. 366). This idea of learning coincides with the principles of, and meets the requirements of, other popular paradigms in education including lifelong learning (Nordin et al, 2010), student-centeredness (Sha et al, 2012) and constructivism (Motiwalla, 2007). Besides the favorable properties referred to in the m-learning literature, some drawbacks of m-learning are frequently criticized. The most pronounced one is the small screen size of m-learning tools, which makes the learning activity difficult (El-Hussein & Cronje, 2010; Kalinic et al, 2011; Riad & El-Ghareeb, 2008; Suki & Suki, 2011). Another problem is the weight and limited battery lives of m-tools, particularly the laptops (Riad & El-Ghareeb, 2008). Lack of understanding or expertise with the technology also hinders nontechnical students' active use of m-learning (Corbeil & Valdes-Corbeil, 2007; Franklin, 2011). Using mobile devices in the classroom can cause distractions and interruptions (Cheon et al, 2012; Fried, 2008; Suki & Suki, 2011). Another concern seems to be the challenged role of the teacher, as most learning activities take place outside the classroom (Sølvberg & Rismark, 2012).
M-learning in higher education
Mobile learning is becoming an increasingly promising way of delivering instruction in higher education (El-Hussein & Cronje, 2010). This is justified by the current statistics about the",
"title": ""
},
{
"docid": "ce0004549d9eec7f47a0a60e11179bba",
"text": "We present in this paper a statistical framework that generates accurate and fluent product description from product attributes. Specifically, after extracting templates and learning writing knowledge from attribute-description parallel data, we use the learned knowledge to decide what to say and how to say for product description generation. To evaluate accuracy and fluency for the generated descriptions, in addition to BLEU and Recall, we propose to measure what to say (in terms of attribute coverage) and to measure how to say (by attribute-specified generation) separately. Experimental results show that our framework is effective.",
"title": ""
}
] |
scidocsrr
|
27c9b07f0509e9b149f818587988b009
|
Context-aware Frame-Semantic Role Labeling
|
[
{
"docid": "44582f087f9bb39d6e542ff7b600d1c7",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
}
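The sieve architecture described above is procedurally simple: start from singleton clusters over a recall-oriented set of mentions, then apply deterministic sieves in order of decreasing precision, each sieve allowed to merge clusters produced by the previous ones. The skeleton below shows that control flow only; the two example sieves are drastically simplified stand-ins for the system's actual rule set.

```python
# Control-flow skeleton of precision-ordered coreference sieves.
# The two sieves shown are toy simplifications, not the system's real rules.
from typing import Callable, List

Cluster = List[str]   # a cluster is just a list of mention strings here

def exact_match_sieve(clusters: List[Cluster]) -> List[Cluster]:
    """Highest precision: merge clusters that share an identical mention string."""
    merged: List[Cluster] = []
    for cluster in clusters:
        for target in merged:
            if {m.lower() for m in cluster} & {m.lower() for m in target}:
                target.extend(cluster)
                break
        else:
            merged.append(list(cluster))
    return merged

def head_match_sieve(clusters: List[Cluster]) -> List[Cluster]:
    """Lower precision: merge clusters whose mentions share a head word (last token)."""
    merged: List[Cluster] = []
    for cluster in clusters:
        heads = {m.lower().split()[-1] for m in cluster}
        for target in merged:
            if heads & {m.lower().split()[-1] for m in target}:
                target.extend(cluster)
                break
        else:
            merged.append(list(cluster))
    return merged

SIEVES: List[Callable[[List[Cluster]], List[Cluster]]] = [exact_match_sieve, head_match_sieve]

mentions = ["Barack Obama", "Obama", "the president", "Michelle Obama"]
clusters = [[m] for m in mentions]        # recall-oriented mention detection
for sieve in SIEVES:                      # highest precision applied first
    clusters = sieve(clusters)
print(clusters)   # the toy head-match sieve over-merges, illustrating why it comes last
```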
] |
[
{
"docid": "e718e98400738013ecd050e57f5083fb",
"text": "© 2011 Jang-Hee Yoo and Mark S. Nixon 259 We present a new method for an automated markerless system to describe, analyze, and classify human gait motion. The automated system consists of three stages: i) detection and extraction of the moving human body and its contour from image sequences, ii) extraction of gait figures by the joint angles and body points, and iii) analysis of motion parameters and feature extraction for classifying human gait. A sequential set of 2D stick figures is used to represent the human gait motion, and the features based on motion parameters are determined from the sequence of extracted gait figures. Then, a knearest neighbor classifier is used to classify the gait patterns. In experiments, this provides an alternative estimate of biomechanical parameters on a large population of subjects, suggesting that the estimate of variance by marker-based techniques appeared generous. This is a very effective and well-defined representation method for analyzing the gait motion. As such, the markerless approach confirms uniqueness of the gait as earlier studies and encourages further development along these lines.",
"title": ""
},
{
"docid": "558218868956bcd05363825fb42ef75e",
"text": "Imitation learning algorithms learn viable policies by imitating an expert’s behavior when reward signals are not available. Generative Adversarial Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies when the expert’s behavior is available as a fixed set of trajectories.We evaluate in terms of the expert’s cost function and observe that the distribution of trajectory-costs is often more heavy-tailed for GAIL-agents than the expert at a number of benchmark continuous-control tasks. Thus, high-cost trajectories, corresponding to tail-end events of catastrophic failure, are more likely to be encountered by the GAIL-agents than the expert. This makes the reliability of GAIL-agents questionable when it comes to deployment in risk-sensitive applications like robotic surgery and autonomous driving. In this work, we aim to minimize the occurrence of tail-end events by minimizing tail risk within the GAIL framework. We quantify tail risk by the Conditional-Value-atRisk (CVaR) of trajectories and develop the Risk-Averse Imitation Learning (RAIL) algorithm. We observe that the policies learned with RAIL show lower tail-end risk than those of vanilla GAIL. Thus, the proposed RAIL algorithm appears as a potent alternative to GAIL for improved reliability in risk-sensitive applications.",
"title": ""
},
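The tail-risk measure used in the abstract above, CVaR at level alpha, is simply the expected cost over the worst alpha-fraction of trajectories. A small empirical estimator is shown below; it illustrates the quantity RAIL penalizes but none of the adversarial training machinery, and the cost distributions are synthetic.

```python
# Empirical Conditional Value-at-Risk (CVaR) of trajectory costs.
# This is only the risk measure itself, not the RAIL training procedure.
import numpy as np

def empirical_cvar(costs: np.ndarray, alpha: float = 0.1) -> float:
    """Mean cost of the worst alpha-fraction of trajectories (higher = heavier tail)."""
    var = np.quantile(costs, 1.0 - alpha)   # Value-at-Risk threshold
    tail = costs[costs >= var]
    return float(tail.mean())

rng = np.random.default_rng(0)
expert_costs = rng.normal(loc=10.0, scale=1.0, size=5000)
agent_costs = np.concatenate([rng.normal(10.0, 1.0, 4800),
                              rng.normal(25.0, 3.0, 200)])   # rare catastrophic rollouts

print(empirical_cvar(expert_costs, alpha=0.05))   # close to the expert's typical cost
print(empirical_cvar(agent_costs, alpha=0.05))    # inflated by the heavy tail
```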
{
"docid": "99485ae4547e0904198c04e88db23556",
"text": "Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.",
"title": ""
},
{
"docid": "93bad64439be375200cce65a37c6b8c6",
"text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.",
"title": ""
},
{
"docid": "9c7fbbde15c03078bce7bd8d07fa6d2a",
"text": "• For each sense sij, we create a sense embedding E(sij), again a D-dimensional vector. • The lemma embeddings can be decomposed into a mix (e.g. a convex combination) of sense vectors, for instance F(rock) = 0.3 · E(rock-1) + 0.7 · E(rock-2). The “mix variables” pij are non-negative and sum to 1 for each lemma. • The intuition of the optimization that each sense sij should be “close” to a number of other concepts, called the network neighbors, that we know are related to it, as defined by a semantic network. For instance, rock-2 might be defined by the network to be related to other types of music.",
"title": ""
},
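The decomposition described in the bullets above is a small piece of linear algebra: a lemma vector is a convex combination of its sense vectors, with mix weights that are non-negative and sum to one. The toy NumPy lines below restate the rock example; the vectors and weights are invented numbers, not learned values.

```python
# Toy illustration of decomposing a lemma embedding into a convex mix of sense embeddings.
# All vectors and mix weights here are invented for the example.
import numpy as np

E = {
    "rock-1": np.array([1.0, 0.0, 0.0, 0.0]),   # the stone sense
    "rock-2": np.array([0.0, 1.0, 0.0, 0.0]),   # the music sense
}
p = {"rock-1": 0.3, "rock-2": 0.7}              # non-negative, sums to 1

F_rock = sum(p[s] * E[s] for s in p)            # F(rock) = 0.3*E(rock-1) + 0.7*E(rock-2)
assert np.isclose(sum(p.values()), 1.0)

# During learning, each sense vector is pulled toward its network neighbors, e.g. rock-2
# toward a music-related concept; cosine similarity is one way to express "closeness".
jazz = np.array([0.0, 0.9, 0.1, 0.0])
closeness = E["rock-2"] @ jazz / (np.linalg.norm(E["rock-2"]) * np.linalg.norm(jazz))
```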
{
"docid": "47b9d5585a0ca7d10cb0fd9da673dd0f",
"text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.",
"title": ""
},
{
"docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec",
"text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.",
"title": ""
},
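The two-stage design described above (feature selection followed by classification) maps naturally onto a standard machine-learning pipeline. The sketch below uses scikit-learn with made-up structural features of a PDF (object counts, JavaScript presence, and so on); the actual feature set, selection method, and classifier of the paper are not specified here, so every concrete choice should be read as an assumption.

```python
# Generic two-stage pipeline (feature selection -> classifier) for PDF malware detection.
# Feature names, selector, and classifier are assumptions, not the paper's algorithm.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-file features, e.g. [n_objects, n_streams, has_javascript,
# has_openaction, n_pages, avg_stream_entropy, ...]; labels: 1 = malicious.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (X[:, 2] + 0.3 * rng.random(1000) > 0.8).astype(int)   # synthetic stand-in labels

pipeline = Pipeline([
    ("select", SelectKBest(score_func=mutual_info_classif, k=8)),        # feature selection stage
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),   # classification stage
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")
print("5-fold F1:", scores.mean())
```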
{
"docid": "583c6d4b7ed442cecfd1000c6c4f2a86",
"text": "Web applications are increasingly subject to mass attacks, with vulnerabilities found easily in both open source and commercial applications as evinced by the fact that approximately half of reported vulnerabilities are found in web applications. In this paper, we perform an empirical investigation of the evolution of vulnerabilities in fourteen of the most widely used open source PHP web applications, finding that vulnerabilities densities declined from 28.12 to 19.96 vulnerabilities per thousand lines of code from 2006 to 2010. We also investigate whether complexity metrics or a security resources indicator (SRI) metric can be used to identify vulnerable web application showing that average cyclomatic complexity is an effective predictor of vulnerability for several applications, especially for those with low SRI scores.",
"title": ""
},
{
"docid": "991a8c7011548af52367e426ba9beed6",
"text": "Dihydrogen, methane, and carbon dioxide isotherm measurements were performed at 1-85 bar and 77-298 K on the evacuated forms of seven porous covalent organic frameworks (COFs). The uptake behavior and capacity of the COFs is best described by classifying them into three groups based on their structural dimensions and corresponding pore sizes. Group 1 consists of 2D structures with 1D small pores (9 A for each of COF-1 and COF-6), group 2 includes 2D structures with large 1D pores (27, 16, and 32 A for COF-5, COF-8, and COF-10, respectively), and group 3 is comprised of 3D structures with 3D medium-sized pores (12 A for each of COF-102 and COF-103). Group 3 COFs outperform group 1 and 2 COFs, and rival the best metal-organic frameworks and other porous materials in their uptake capacities. This is exemplified by the excess gas uptake of COF-102 at 35 bar (72 mg g(-1) at 77 K for hydrogen, 187 mg g(-1) at 298 K for methane, and 1180 mg g(-1) at 298 K for carbon dioxide), which is similar to the performance of COF-103 but higher than those observed for COF-1, COF-5, COF-6, COF-8, and COF-10 (hydrogen at 77 K, 15 mg g(-1) for COF-1, 36 mg g(-1) for COF-5, 23 mg g(-1) for COF-6, 35 mg g(-1) for COF-8, and 39 mg g(-1) for COF-10; methane at 298 K, 40 mg g(-1) for COF-1, 89 mg g(-1) for COF-5, 65 mg g(-1) for COF-6, 87 mg g(-1) for COF-8, and 80 mg g(-1) for COF-10; carbon dioxide at 298 K, 210 mg g(-1) for COF-1, 779 mg g(-1) for COF-5, 298 mg g(-1) for COF-6, 598 mg g(-1) for COF-8, and 759 mg g(-1) for COF-10). These findings place COFs among the most porous and the best adsorbents for hydrogen, methane, and carbon dioxide.",
"title": ""
},
{
"docid": "91fbf465741c6a033a00a4aa982630b4",
"text": "This paper presents an integrated functional link interval type-2 fuzzy neural system (FLIT2FNS) for predicting the stock market indices. The hybrid model uses a TSK (Takagi–Sugano–Kang) type fuzzy rule base that employs type-2 fuzzy sets in the antecedent parts and the outputs from the Functional Link Artificial Neural Network (FLANN) in the consequent parts. Two other approaches, namely the integrated FLANN and type-1 fuzzy logic system and Local Linear Wavelet Neural Network (LLWNN) are also presented for a comparative study. Backpropagation and particle swarm optimization (PSO) learning algorithms have been used independently to optimize the parameters of all the forecasting models. To test the model performance, three well known stock market indices like the Standard’s & Poor’s 500 (S&P 500), Bombay stock exchange (BSE), and Dow Jones industrial average (DJIA) are used. The mean absolute percentage error (MAPE) and root mean square error (RMSE) are used to find out the performance of all the three models. Finally, it is observed that out of three methods, FLIT2FNS performs the best irrespective of the time horizons spanning from 1 day to 1 month. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5356a208f0f6eb4659b2a09a106bab8d",
"text": "Objective: Traditional Cognitive Training with paper-pencil tasks (PPCT) and Computer-Based Cognitive Training (C-BCT) both are effective for people with Mild Cognitive Impairment (MCI). The aim of this study is to evaluate the efficacy of a C-BCT program versus a PPCT one. Methods: One hundred and twenty four (n=124) people with amnesic & multiple domains MCI (aMCImd) diagnosis were randomly assigned in two groups, a PPCT group (n=65), and a C-BCT (n=59). The groups were matched at baseline in age, gender, education, cognitive and functional performance. Both groups attended 48 weekly 1-hour sessions of attention and executive function training for 12 months. Neuropsychological assessment was performed at baseline and 12 months later. Results: At the follow up, the PPCT group was better than the C-BCT group in visual selective attention (p≤ 0.022). The C-BCT group showed improvement in working memory (p=0.042) and in speed of switching of attention (p=0.012), while the PPCT group showed improvement in general cognitive function (p=0.005), learning ability (p=0.000), delayed verbal recall (p=0.000), visual perception (p=0.013) and visual memory (p=0.000), verbal fluency (p=0.000), visual selective attention (p=0.021), speed of switching of attention (p=0.001), visual selective attention/multiple choices (p=0.010) and Activities of Daily Living (ADL) as well (p=0.001). Conclusion: Both C-BCT and PPCT are beneficial for people with aMCImd concerning cognitive functions. However, the administration of a traditional PPCT program seems to affect a greater range of cognitive abilities and transfer the primary cognitive benefit in real life.",
"title": ""
},
{
"docid": "395f97b609acb40a8922eb4a6d398c0a",
"text": "Ambient obscurance (AO) produces perceptually important illumination effects such as darkened corners, cracks, and wrinkles; proximity darkening; and contact shadows. We present the AO algorithm from the Alchemy engine used at Vicarious Visions in commercial games. It is based on a new derivation of screen-space obscurance for robustness, and the insight that a falloff function can cancel terms in a visibility integral to favor efficient operations. Alchemy creates contact shadows that conform to surfaces, captures obscurance from geometry of varying scale, and provides four intuitive appearance parameters: world-space radius and bias, and aesthetic intensity and contrast.\n The algorithm estimates obscurance at a pixel from sample points read from depth and normal buffers. It processes dynamic scenes at HD 720p resolution in about 4.5 ms on Xbox 360 and 3 ms on NVIDIA GeForce580.",
"title": ""
},
{
"docid": "dc259f1208eac95817d067b9cd13fa7c",
"text": "This paper introduces a novel approach to texture synthesis based on generative adversarial networks (GAN) (Goodfellow et al., 2014). We extend the structure of the input noise distribution by constructing tensors with different types of dimensions. We call this technique Periodic Spatial GAN (PSGAN). The PSGAN has several novel abilities which surpass the current state of the art in texture synthesis. First, we can learn multiple textures from datasets of one or more complex large images. Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset. In addition, we can also accurately learn periodical textures. We make multiple experiments which show that PSGANs can flexibly handle diverse texture and image data sources. Our method is highly scalable and it can generate output images of arbitrary large size.",
"title": ""
},
{
"docid": "fb9bbfc3e301cb669663a12d1f18a11f",
"text": "In extensively modified landscapes, how the matrix is managed determines many conservation outcomes. Recent publications revise popular conceptions of a homogeneous and static matrix, yet we still lack an adequate conceptual model of the matrix. Here, we identify three core effects that influence patch-dependent species, through impacts associated with movement and dispersal, resource availability, and the abiotic environment. These core effects are modified by five 'dimensions': spatial and temporal variation in matrix quality; spatial scale; temporal scale of matrix variation; and adaptation. The conceptual domain of the matrix, defined as three core effects and their interaction with these five dimensions, provides a much-needed framework to underpin management of fragmented landscapes and highlights new research priorities.",
"title": ""
},
{
"docid": "a9b769e33467cdcc86ab47b5183e5a5b",
"text": "The focus of this study is to examine the motivations of online community members to share information and rumors. We investigated an online community of interest, the members of which voluntarily associate and communicate with people with similar interests. Community members, posters and lurkers alike, were surveyed on the influence of extrinsic and intrinsic motivations, as well as normative influences, on their willingness to share information and rumors with others. The results indicated that posters and lurkers are differently motivated by intrinsic factors to share, and that extrinsic rewards like improved reputation and status-building within the community are motivating factors for rumor mongering. The results are discussed and future directions for this area of research are offered.",
"title": ""
},
{
"docid": "b181715b75842987e5f30ccd5765e378",
"text": "Klondike Solitaire – also known as Patience – is a well-known single player card game. We studied several classes of Klondike Solitaire game configurations. We present a dynamic programming solution for counting the number of “unplayable” games. This method is extended for a subset of games which allow exactly one move. With an algorithm based on the inclusion-exclusion principle, symmetry elimination and a trade-off between lookup tables and dynamic programming we count the number of games that cannot be won due to a specific type of conflict. The size of a larger class of conflicting configurations is approximated with a Monte Carlo simulation. We investigate how much gameplay is limited by the stock. We give a recursion and show that Pfaff-Fuss-Catalan is a lower bound. We consider trivial games and report on two remarkable patterns we discovered.",
"title": ""
},
{
"docid": "b9a5cedbec1b6cd5091fb617c0513a13",
"text": "The cerebellum undergoes a protracted development, making it particularly vulnerable to a broad spectrum of developmental events. Acquired destructive and hemorrhagic insults may also occur. The main steps of cerebellar development are reviewed. The normal imaging patterns of the cerebellum in prenatal ultrasound and magnetic resonance imaging (MRI) are described with emphasis on the limitations of these modalities. Because of confusion in the literature regarding the terminology used for cerebellar malformations, some terms (agenesis, hypoplasia, dysplasia, and atrophy) are clarified. Three main pathologic settings are considered and the main diagnoses that can be suggested are described: retrocerebellar fluid enlargement with normal or abnormal biometry (Dandy-Walker malformation, Blake pouch cyst, vermian agenesis), partially or globally decreased cerebellar biometry (cerebellar hypoplasia, agenesis, rhombencephalosynapsis, ischemic and/or hemorrhagic damage), partially or globally abnormal cerebellar echogenicity (ischemic and/or hemorrhagic damage, cerebellar dysplasia, capillary telangiectasia). The appropriate timing for performing MRI is also discussed.",
"title": ""
},
{
"docid": "17f7360d6eda0ddddbf27c6de21a3746",
"text": "Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves with the addition of eye pose information? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an “owl” and “lizard” which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot (“owl”), not much classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move (“lizard”), classification accuracy increases significantly from adding in eye pose. We characterize how that accuracy varies between people, gaze strategies, and gaze regions.",
"title": ""
},
{
"docid": "8b5bf5c5717d77c7a8b836758e9cd37e",
"text": "Purpose – Due to the size and velocity at which user generated content is created on social media services such as Twitter, analysts are often limited by the need to pre-determine the specific topics and themes they wish to follow. Visual analytics software may be used to support the interactive discovery of emergent themes. The paper aims to discuss these issues. Design/methodology/approach – Tweets collected from the live Twitter stream matching a user’s query are stored in a database, and classified based on their sentiment. The temporally changing sentiment is visualized, along with sparklines showing the distribution of the top terms, hashtags, user mentions, and authors in each of the positive, neutral, and negative classes. Interactive tools are provided to support sub-querying and the examination of emergent themes. Findings – A case study of using Vista to analyze sport fan engagement within a mega-sport event (2013 Le Tour de France) is provided. The authors illustrate how emergent themes can be identified and isolated from the large collection of data, without the need to identify these a priori. Originality/value – Vista provides mechanisms that support the interactive exploration among Twitter data. By combining automatic data processing and machine learning methods with interactive visualization software, researchers are relieved of tedious data processing tasks, and can focus on the analysis of high-level features of the data. In particular, patterns of Twitter use can be identified, emergent themes can be isolated, and purposeful samples of the data can be selected by the researcher for further analysis.",
"title": ""
},
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
}
] |
scidocsrr
|
73ecd876e133b841d730791874b3f323
|
A Wearable Reflectance Pulse Oximeter for Remote Physiological Monitoring
|
[
{
"docid": "3a3a2261e1063770a9ccbd0d594aa561",
"text": "This paper describes an advanced care and alert portable telemedical monitor (AMON), a wearable medical monitoring and alert system targeting high-risk cardiac/respiratory patients. The system includes continuous collection and evaluation of multiple vital signs, intelligent multiparameter medical emergency detection, and a cellular connection to a medical center. By integrating the whole system in an unobtrusive, wrist-worn enclosure and applying aggressive low-power design techniques, continuous long-term monitoring can be performed without interfering with the patients' everyday activities and without restricting their mobility. In the first two and a half years of this EU IST sponsored project, the AMON consortium has designed, implemented, and tested the described wrist-worn device, a communication link, and a comprehensive medical center software package. The performance of the system has been validated by a medical study with a set of 33 subjects. The paper describes the main concepts behind the AMON system and presents details of the individual subsystems and solutions as well as the results of the medical validation.",
"title": ""
}
] |
[
{
"docid": "e63e272f3ca07e1e7e90e53f6008e675",
"text": "Energy management in microgrids is typically formulated as an offline optimization problem for day-ahead scheduling by previous studies. Most of these offline approaches assume perfect forecasting of the renewables, the demands, and the market, which is difficult to achieve in practice. Existing online algorithms, on the other hand, oversimplify the microgrid model by only considering the aggregate supply-demand balance while omitting the underlying power distribution network and the associated power flow and system operational constraints. Consequently, such approaches may result in control decisions that violate the real-world constraints. This paper focuses on developing an online energy management strategy (EMS) for real-time operation of microgrids that takes into account the power flow and system operational constraints on a distribution network. We model the online energy management as a stochastic optimal power flow problem and propose an online EMS based on Lyapunov optimization. The proposed online EMS is subsequently applied to a real-microgrid system. The simulation results demonstrate that the performance of the proposed EMS exceeds a greedy algorithm and is close to an optimal offline algorithm. Lastly, the effect of the underlying network structure on energy management is observed and analyzed.",
"title": ""
},
{
"docid": "03abab0bc882ada2c7ba4d512ac98d0e",
"text": "The main goal of this project is to use the solar or AC power to charge all kind of regulated and unregulated battery like electric vehicle’s battery. Besides that, it will charge Lithium-ion (Li-ion) batteries of different voltage level. A standard pulse width modulation (PWM) which is controlled by duty cycle is used to build the solar or AC fed battery charger. A microcontroller unit and Buck/Boost converters are also used to build the charger. This charger changes the output voltages from variable input voltages with fixed amplitude in PWM. It gives regulated voltages for charging sensitive batteries. An unregulated output voltage can be obtained for electric vehicle’s battery. The battery charger is tested and the obtained result allowed to conclude the conditions of permanent control on the battery charger.",
"title": ""
},
{
"docid": "d175a51376883c1b563633d67dde6b8c",
"text": "Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes (e.g., AMR, UCCA, GMB, UDS) have been put forth. Yet, little has been done to assess the achievements and the shortcomings of these new contenders, compare them with syntactic schemes, and clarify the general goals of research on semantic representation. We address these gaps by critically surveying the state of the art in the field.1",
"title": ""
},
{
"docid": "8e180c13b925188f1925fee03c641669",
"text": "“Web applications have become increasingly complex and highly vulnerable,” says Peter Wood, member of the ISACA Security Advisory Group and CEO of First Base Technologies. “Social networking sites, consumer technologies – smartphones, tablets etc – and cloud services are all game changers this year. More enterprises are now requesting social engineering tests, which shows an increased awareness of threats beyond website attacks.”",
"title": ""
},
{
"docid": "1004cd19681bbebfabf51396c6b78e34",
"text": "OBJECTIVE\nThe objectives of this study were to develop a coronary heart disease (CHD) risk model among the Korean Heart Study (KHS) population and compare it with the Framingham CHD risk score.\n\n\nDESIGN\nA prospective cohort study within a national insurance system.\n\n\nSETTING\n18 health promotion centres nationwide between 1996 and 2001 in Korea.\n\n\nPARTICIPANTS\n268 315 Koreans between the ages of 30 and 74 years without CHD at baseline.\n\n\nOUTCOME MEASURE\nNon-fatal or fatal CHD events between 1997 and 2011. During an 11.6-year median follow-up, 2596 CHD events (1903 non-fatal and 693 fatal) occurred in the cohort. The optimal CHD model was created by adding high-density lipoprotein (HDL)-cholesterol, low-density lipoprotein (LDL)-cholesterol and triglycerides to the basic CHD model, evaluating using the area under the receiver operating characteristic curve (ROC) and continuous net reclassification index (NRI).\n\n\nRESULTS\nThe optimal CHD models for men and women included HDL-cholesterol (NRI=0.284) and triglycerides (NRI=0.207) from the basic CHD model, respectively. The discrimination using the CHD model in the Korean cohort was high: the areas under ROC were 0.764 (95% CI 0.752 to 0.774) for men and 0.815 (95% CI 0.795 to 0.835) for women. The Framingham risk function predicted 3-6 times as many CHD events than observed. Recalibration of the Framingham function using the mean values of risk factors and mean CHD incidence rates of the KHS cohort substantially improved the performance of the Framingham functions in the KHS cohort.\n\n\nCONCLUSIONS\nThe present study provides the first evidence that the Framingham risk function overestimates the risk of CHD in the Korean population where CHD incidence is low. The Korean CHD risk model is well-calculated alternations which can be used to predict an individual's risk of CHD and provides a useful guide to identify the groups at high risk for CHD among Koreans.",
"title": ""
},
{
"docid": "f2a2f1e8548cc6fcff6f1d565dfa26c9",
"text": "Cabbage contains the glucosinolate sinigrin, which is hydrolyzed by myrosinase to allyl isothiocyanate. Isothiocyanates are thought to inhibit the development of cancer cells by a number of mechanisms. The effect of cooking cabbage on isothiocyanate production from glucosinolates during and after their ingestion was examined in human subjects. Each of 12 healthy human volunteers consumed three meals, at 48-h intervals, containing either raw cabbage, cooked cabbage, or mustard according to a cross-over design. At each meal, watercress juice, which is rich in phenethyl isothiocyanate, was also consumed to allow individual and temporal variation in postabsorptive isothiocyanate recovery to be measured. Volunteers recorded the time and volume of each urination for 24 h after each meal. Samples of each urination were analyzed for N-acetyl cysteine conjugates of isothiocyanates as a measure of entry of isothiocyanates into the peripheral circulation. Excretion of isothiocyanates was rapid and substantial after ingestion of mustard, a source of preformed allyl isothiocyanate. After raw cabbage consumption, allyl isothiocyanate was again rapidly excreted, although to a lesser extent than when mustard was consumed. On the cooked cabbage treatment, excretion of allyl isothiocyanate was considerably less than for raw cabbage, and the excretion was delayed. The results indicate that isothiocyanate production is more extensive after consumption of raw vegetables but that isothiocyanates still arise, albeit to a lesser degree, when cooked vegetables are consumed. The lag in excretion on the cooked cabbage treatment suggests that the colon microflora catalyze glucosinolate hydrolysis in this case.",
"title": ""
},
{
"docid": "b8d41b4b440641d769f58189db8eaf91",
"text": "Differential diagnosis of trichotillomania is often difficult in clinical practice. Trichoscopy (hair and scalp dermoscopy) effectively supports differential diagnosis of various hair and scalp diseases. The aim of this study was to assess the usefulness of trichoscopy in diagnosing trichotillomania. The study included 370 patients (44 with trichotillomania, 314 with alopecia areata and 12 with tinea capitis). Statistical analysis revealed that the main and most characteristic trichoscopic findings of trichotillomania are: irregularly broken hairs (44/44; 100% of patients), v-sign (24/44; 57%), flame hairs (11/44; 25%), hair powder (7/44; 16%) and coiled hairs (17/44; 39%). Flame hairs, v-sign, tulip hairs, and hair powder were newly identified in this study. In conclusion, we describe here specific trichoscopy features, which may be applied in quick, non-invasive, in-office differential diagnosis of trichotillomania.",
"title": ""
},
{
"docid": "a0d6536cd8c85fe87cb316f92b489d32",
"text": "As a design of information-centric network architecture, Named Data Networking (NDN) provides content-based security. The signature binding the name with the content is the key point of content-based security in NDN. However, signing a content will introduce a significant computation overhead, especially for dynamically generated content. Adversaries can take advantages of such computation overhead to deplete the resources of the content provider. In this paper, we propose Interest Cash, an application-based countermeasure against Interest Flooding for dynamic content. Interest Cash requires a content consumer to solve a puzzle before it sends an Interest. The content consumer should provide a solution to this puzzle as cash to get the signing service from the content provider. The experiment shows that an adversary has to use more than 300 times computation resources of the content provider to commit a successful attack when Interest Cash is used.",
"title": ""
},
{
"docid": "dd15c51d3f5f25d43169c927ac753013",
"text": "After completing this article, readers should be able to: 1. List the risk factors for severe hyperbilirubinemia. 2. Distinguish between physiologic jaundice and pathologic jaundice of the newborn. 3. Recognize the clinical manifestations of acute bilirubin encephalopathy and the permanent clinical sequelae of kernicterus.4. Describe the evaluation of hyperbilirubinemia from birth through 3 months of age. 5. Manage neonatal hyperbilirubinemia, including referral to the neonatal intensive care unit for exchange transfusion.",
"title": ""
},
{
"docid": "1203822bf82dcd890e7a7a60fb282ce5",
"text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use. ARTICLE HISTORY Received 12 January 2016 Accepted 19 February 2016",
"title": ""
},
{
"docid": "fcd9a80d35a24c7222392c11d3376c72",
"text": "A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic that the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, one band resonant frequency and return loss bandwidth of the proposed hybrid antenna allows almost independent optimization without noticeably affecting those of the other band.",
"title": ""
},
{
"docid": "b3874f8390e284c119635e7619e7d952",
"text": "Since a vehicle logo is the clearest indicator of a vehicle manufacturer, most vehicle manufacturer recognition (VMR) methods are based on vehicle logo recognition. Logo recognition can be still a challenge due to difficulties in precisely segmenting the vehicle logo in an image and the requirement for robustness against various imaging situations simultaneously. In this paper, a convolutional neural network (CNN) system has been proposed for VMR that removes the requirement for precise logo detection and segmentation. In addition, an efficient pretraining strategy has been introduced to reduce the high computational cost of kernel training in CNN-based systems to enable improved real-world applications. A data set containing 11 500 logo images belonging to 10 manufacturers, with 10 000 for training and 1500 for testing, is generated and employed to assess the suitability of the proposed system. An average accuracy of 99.07% is obtained, demonstrating the high classification potential and robustness against various poor imaging situations.",
"title": ""
},
{
"docid": "56c7c065c390d1ed5f454f663289788d",
"text": "This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76% and the accuracy of 95.30% for character identification.",
"title": ""
},
{
"docid": "265421a07efc8ab26a6766f90bf53245",
"text": "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.\n In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.",
"title": ""
},
{
"docid": "4ad106897a19830c80a40e059428f039",
"text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated",
"title": ""
},
{
"docid": "87e732240f00b112bf2bb44af0ff8ca1",
"text": "Spoken Dialogue Systems (SDS) are man-machine interfaces which use natural language as the medium of interaction. Dialogue corpora collection for the purpose of training and evaluating dialogue systems is an expensive process. User simulators aim at simulating human users in order to generate synthetic data. Existing methods for user simulation mainly focus on generating data with the same statistical consistency as in some reference dialogue corpus. This paper outlines a novel approach for user simulation based on Inverse Reinforcement Learning (IRL). The task of building the user simulator is perceived as a task of imitation learning.",
"title": ""
},
{
"docid": "0cf7ebc02a8396a615064892d9ee6f22",
"text": "With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily. 1 Evolution of Ontology Evolution Acceptance of ontologies as an integral part of knowledge-intensive applications has been growing steadily. The word ontology became a recognized substrate in fields outside the computer science, from bioinformatics to intelligence analysis. With such acceptance, came the use of ontologies in industrial systems and active publishing of ontologies on the (Semantic) Web. More and more often, developing an ontology is not a project undertaken by a single person or a small group of people in a research laboratory, but rather it is a large project with numerous participants, who are often geographically distributed, where the resulting ontologies are used in production environments with paying customers counting on robustness and reliability of the system. The Protégé ontology-development environment1 has become a widely used tool for developing ontologies, with more than 50,000 registered users. The Protégé group works closely with some of the tool’s users and we have a continuous stream of requests from them on the features that they would like to have supported in terms of managing and developing ontologies collaboratively. The configurations for collaborative development differ significantly however. For instance, Perot Systems2 uses a client–server mode of Protégé with multiple users simultaneously accessing the same copy of the ontology on the server. The NCI Center for Bioinformatics, which develops the NCI The1 http://protege.stanford.edu 2 http://www.perotsystems.com saurus3 has a different configuration: a baseline version of the Thesaurus is published regularly and between the baselines, multiple editors work asynchronously on their own versions. At the end of the cycle, the changes are reconciled. In the OBO project,4 ontology developers post their ontologies on a sourceforge site, using the sourceforge version-control system to publish successive versions. In addition to specific requirements to support each of these collaboration models, users universally request the ability to annotate their changes, to hold discussions about the changes, to see the change history with respective annotations, and so on. When developing tool support for all the different modes and tasks in the process of ontology evolution, we started with separate and unrelated sets of Protégé plugins that supported each of the collaborative editing modes. 
This approach, however, was difficult to maintain; besides, we saw that tools developed for one mode (such as change annotation) will be useful in other modes. Therefore, we have developed a single unified framework that is flexible enough to work in either synchronous or asynchronous mode, in those environments where Protégé and our plugins are used to track changes and in those environments where there is no record of the change steps. At the center of the system is a Change and Annotation Ontology (CHAO) with instances recording specific changes and meta-information about them (author, timestamp, annotations, acceptance status, etc.). When Protégé and its change-management plugins are used for ontology editing, these tools create CHAO instances as a side product of the editing process. Otherwise, the CHAO instances are created from a structural diff produced by comparing two versions. The CHAO instances then drive the user interface that displays changes between versions to a user, allows him to accept and reject changes, to view concept history, to generate a new baseline, to publish a history of changes that other applications can use, and so on. This paper makes the following contributions: – analysis and categorization of different scenarios for ontology maintenance and evolution and their functional requirements (Section 2) – development of a comprehensive solution that addresses most of the functional requirements from the different scenarios in a single unified framework (Section 3) – implementation of the solution as a set of open-source Protégé plugins (Section 4) 2 Ontology-Evolution Scenarios and Tasks We will now discuss different scenarios for ontology maintenance and evolution, their attributes, and functional requirements.",
"title": ""
},
{
"docid": "6fa90d1212c53f4bf5da7c49c63a4248",
"text": "Social coding paradigm is reshaping the distributed software development with a surprising speed in recent years. Github, a remarkable social coding community, attracts a huge number of developers in a short time. Various kinds of social networks are formed based on social activities among developers. Why this new paradigm can achieve such a great success in attracting external developers, and how they are connected in such a massive community, are interesting questions for revealing power of social coding paradigm. In this paper, we firstly compare the growth curves of project and user in GitHub with three traditional open source software communities to explore differences of their growth modes. We find an explosive growth of the users in GitHub and introduce the Diffusion of Innovation theory to illustrate intrinsic sociological basis of this phenomenon. Secondly, we construct follow-networks according to the follow behaviors among developers in GitHub. Finally, we present four typical social behavior patterns by mining follow-networks containing independence-pattern, group-pattern, star-pattern and hub-pattern. This study can provide several instructions of crowd collaboration to newcomers. According to the typical behavior patterns, the community manager could design corresponding assistive tools for developers.",
"title": ""
},
{
"docid": "5c129341d3b250dcbd5732a61ae28d53",
"text": "Circadian rhythms govern a remarkable variety of metabolic and physiological functions. Accumulating epidemiological and genetic evidence indicates that the disruption of circadian rhythms might be directly linked to cancer. Intriguingly, several molecular gears constituting the clock machinery have been found to establish functional interplays with regulators of the cell cycle, and alterations in clock function could lead to aberrant cellular proliferation. In addition, connections between the circadian clock and cellular metabolism have been identified that are regulated by chromatin remodelling. This suggests that abnormal metabolism in cancer could also be a consequence of a disrupted circadian clock. Therefore, a comprehensive understanding of the molecular links that connect the circadian clock to the cell cycle and metabolism could provide therapeutic benefit against certain human neoplasias.",
"title": ""
},
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
}
] |
scidocsrr
|
5e737b16de0ad8d1b04abe746ac3d658
|
A 22nm ±0.95V CMOS OTA-C front-end with 50/60 Hz notch for biomedical signal acquisition
|
[
{
"docid": "73aa720bebc5f2fa1930930fb4185490",
"text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.",
"title": ""
}
] |
[
{
"docid": "ef62b0e14f835a36c3157c1ae0f858e5",
"text": "Algorithms based on Convolutional Neural Network (CNN) have recently been applied to object detection applications, greatly improving their performance. However, many devices intended for these algorithms have limited computation resources and strict power consumption constraints, and are not suitable for algorithms designed for GPU workstations. This paper presents a novel method to optimise CNN-based object detection algorithms targeting embedded FPGA platforms. Given parameterised CNN hardware modules, an optimisation flow takes network architectures and resource constraints as input, and tunes hardware parameters with algorithm-specific information to explore the design space and achieve high performance. The evaluation shows that our design model accuracy is above 85% and, with optimised configuration, our design can achieve 49.6 times speed-up compared with software implementation.",
"title": ""
},
{
"docid": "9a46e35fae0b3b7bdbb935b20ca9516b",
"text": "Though quite challenging, leveraging large-scale unlabeled or partially labeled data in learning systems (e.g., model/classifier training) has attracted increasing attentions due to its fundamental importance. To address this problem, many active learning (AL) methods have been proposed that employ up-to-date detectors to retrieve representative minority samples according to predefined confidence or uncertainty thresholds. However, these AL methods cause the detectors to ignore the remaining majority samples (i.e., those with low uncertainty or high prediction confidence). In this paper, by developing a principled active sample mining (ASM) framework, we demonstrate that cost-effective mining samples from these unlabeled majority data are a key to train more powerful object detectors while minimizing user effort. Specifically, our ASM framework involves a switchable sample selection mechanism for determining whether an unlabeled sample should be manually annotated via AL or automatically pseudolabeled via a novel self-learning process. The proposed process can be compatible with mini-batch-based training (i.e., using a batch of unlabeled or partially labeled data as a one-time input) for object detection. In this process, the detector, such as a deep neural network, is first applied to the unlabeled samples (i.e., object proposals) to estimate their labels and output the corresponding prediction confidences. Then, our ASM framework is used to select a number of samples and assign pseudolabels to them. These labels are specific to each learning batch based on the confidence levels and additional constraints introduced by the AL process and will be discarded afterward. Then, these temporarily labeled samples are employed for network fine-tuning. In addition, a few samples with low-confidence predictions are selected and annotated via AL. Notably, our method is suitable for object categories that are not seen in the unlabeled data during the learning process. Extensive experiments on two public benchmarks (i.e., the PASCAL VOC 2007/2012 data sets) clearly demonstrate that our ASM framework can achieve performance comparable to that of the alternative methods but with significantly fewer annotations.",
"title": ""
},
{
"docid": "4177fc3fa7c5abe25e4e144e6c079c1f",
"text": "A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2.",
"title": ""
},
{
"docid": "10b8aa3bc47a05d2e0eddc83f6922005",
"text": "Bluetooth Low Energy (BLE), a low-power wireless protocol, is widely used in industrial automation for monitoring field devices. Although the BLE standard defines advanced security mechanisms, there are known security attacks for BLE and BLE-enabled field devices must be tested thoroughly against these attacks. This article identifies the possible attacks for BLE-enabled field devices relevant for industrial automation. It also presents a framework for defining and executing BLE security attacks and evaluates it on three BLE devices. All tested devices are vulnerable and this confirms that there is a need for better security testing tools as well as for additional defense mechanisms for BLE devices.",
"title": ""
},
{
"docid": "0737e99613b83104bc9390a46fbc4aeb",
"text": "Natural language text exhibits hierarchical structure in a variety of respects. Ideally, we could incorporate our prior knowledge of this hierarchical structure into unsupervised learning algorithms that work on text data. Recent work by Nickel and Kiela (2017) proposed using hyperbolic instead of Euclidean embedding spaces to represent hierarchical data and demonstrated encouraging results when embedding graphs. In this work, we extend their method with a re-parameterization technique that allows us to learn hyperbolic embeddings of arbitrarily parameterized objects. We apply this framework to learn word and sentence embeddings in hyperbolic space in an unsupervised manner from text corpora. The resulting embeddings seem to encode certain intuitive notions of hierarchy, such as wordcontext frequency and phrase constituency. However, the implicit continuous hierarchy in the learned hyperbolic space makes interrogating the model’s learned hierarchies more difficult than for models that learn explicit edges between items. The learned hyperbolic embeddings show improvements over Euclidean embeddings in some – but not all – downstream tasks, suggesting that hierarchical organization is more useful for some tasks than others.",
"title": ""
},
{
"docid": "3d0a6b490a80e79690157a9ed690fdcc",
"text": "In this paper we introduce a novel Depth-Aware Video Saliency approach to predict human focus of attention when viewing videos that contain a depth map (RGBD) on a 2D screen. Saliency estimation in this scenario is highly important since in the near future 3D video content will be easily acquired yet hard to display. Despite considerable progress in 3D display technologies, most are still expensive and require special glasses for viewing, so RGBD content is primarily viewed on 2D screens, removing the depth channel from the final viewing experience. We train a generative convolutional neural network that predicts the 2D viewing saliency map for a given frame using the RGBD pixel values and previous fixation estimates in the video. To evaluate the performance of our approach, we present a new comprehensive database of 2D viewing eye-fixation ground-truth for RGBD videos. Our experiments indicate that it is beneficial to integrate depth into video saliency estimates for content that is viewed on a 2D display. We demonstrate that our approach outperforms state-of-the-art methods for video saliency, achieving 15% relative improvement.",
"title": ""
},
{
"docid": "4cae8749b6d12f38ddf8e4c26bb15b53",
"text": "The developments in monitor technology have accelerated in recent years, acquiring a new dimension. The use of liquid crystal display (LCD) and light emitting diode (LED) monitors has rapidly reduced the use of cathode ray tube (CRT) technology in computers and televisions (TVs). As a result, such devices have accumulated as electronic waste and constitute a new problem. Large parts of electronic waste can be recycled for reuse. However, some types of waste, such as CRT TVs and computer monitors, form hazardous waste piles due to the toxic components (lead, barium, strontium) they contain. CRT monitors contain different types of glass constructions and they can therefore be recycled. However, the toxic substances they contain prevent them from being transformed into glass for everyday use. Furthermore, because CRT technology is obsolete, it is not profitable to use CRT as a raw material again. For this reason, poisonous components in glass ceramic structures found in CRT monitors can be confined and used in closed-loop recycling for various sectors.",
"title": ""
},
{
"docid": "0e4722012aeed8dc356aa8c49da8c74f",
"text": "The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.",
"title": ""
},
{
"docid": "64bd2fc0d1b41574046340833144dabe",
"text": "Probe-based confocal laser endomicroscopy (pCLE) provides high-resolution in vivo imaging for intraoperative tissue characterization. Maintaining a desired contact force between target tissue and the pCLE probe is important for image consistency, allowing large area surveillance to be performed. A hand-held instrument that can provide a predetermined contact force to obtain consistent images has been developed. The main components of the instrument include a linear voice coil actuator, a donut load-cell, and a pCLE probe. In this paper, detailed mechanical design of the instrument is presented and system level modeling of closed-loop force control of the actuator is provided. The performance of the instrument has been evaluated in bench tests as well as in hand-held experiments. Results demonstrate that the instrument ensures a consistent predetermined contact force between pCLE probe tip and tissue. Furthermore, it compensates for both simulated physiological movement of the tissue and involuntary movements of the operator's hand. Using pCLE video feature tracking of large colonic crypts within the mucosal surface, the steadiness of the tissue images obtained using the instrument force control is demonstrated by confirming minimal crypt translation.",
"title": ""
},
{
"docid": "b987f831f4174ad5d06882040769b1ac",
"text": "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 1 Summary Application trends, device technologies and the architecture of systems drive progress in information technologies. However,",
"title": ""
},
{
"docid": "ff5700d97ad00fcfb908d90b56f6033f",
"text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "2de8df231b5af77cfd141e26fb7a3ace",
"text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",
"title": ""
},
{
"docid": "c4a74726ac56b0127e5920098e6f0258",
"text": "BACKGROUND\nAttention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.\n\n\nOBJECTIVES\nWe examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.\n\n\nMETHODS\nWe reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The data bases PubMed and Google scholar were searched from the year 1995 to 2015 using the search terms \"ADHD, diagnosis, outcomes.\" We then reviewed abstracts and reference lists within those articles to rule out or rule in these or other articles.\n\n\nRESULTS\nMultiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.\n\n\nCONCLUSION\nADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "46a55d7a3349f7228acb226ed7875dc9",
"text": "Previous research on driver drowsiness detection has focused primarily on lane deviation metrics and high levels of fatigue. The present research sought to develop a method for detecting driver drowsiness at more moderate levels of fatigue, well before accident risk is imminent. Eighty-seven different driver drowsiness detection metrics proposed in the literature were evaluated in two simulated shift work studies with high-fidelity simulator driving in a controlled laboratory environment. Twenty-nine participants were subjected to a night shift condition, which resulted in moderate levels of fatigue; 12 participants were in a day shift condition, which served as control. Ten simulated work days in the study design each included four 30-min driving sessions, during which participants drove a standardized scenario of rural highways. Ten straight and uneventful road segments in each driving session were designated to extract the 87 different driving metrics being evaluated. The dimensionality of the overall data set across all participants, all driving sessions and all road segments was reduced with principal component analysis, which revealed that there were two dominant dimensions: measures of steering wheel variability and measures of lateral lane position variability. The latter correlated most with an independent measure of fatigue, namely performance on a psychomotor vigilance test administered prior to each drive. We replicated our findings across eight curved road segments used for validation in each driving session. Furthermore, we showed that lateral lane position variability could be derived from measured changes in steering wheel angle through a transfer function, reflecting how steering wheel movements change vehicle heading in accordance with the forces acting on the vehicle and the road. This is important given that traditional video-based lane tracking technology is prone to data loss when lane markers are missing, when weather conditions are bad, or in darkness. Our research findings indicated that steering wheel variability provides a basis for developing a cost-effective and easy-to-install alternative technology for in-vehicle driver drowsiness detection at moderate levels of fatigue.",
"title": ""
},
{
"docid": "34bf7fb014f5b511943526c28407cb4b",
"text": "Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes the local or remote control of the mobile device, by leveraging a vulnerability of the system, hence sending back the collected information to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device, but he is able to eavesdrop the network traffic of the device from the network side (e.g., controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system, and we also run a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95%, for most of the considered actions. We compared our solution with the three state-of-the-art algorithms, and confirming that our system outperforms all these direct competitors.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "6ae9da259125e0173f41fa3506641ca4",
"text": "We study the Maximum Weighted Matching problem in a partial information setting where the agents’ utilities for being matched to other agents are hidden and the mechanism only has access to ordinal preference information. Our model is motivated by the fact that in many settings, agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Specifically, we study problems where the ground truth exists in the form of a weighted graph, and look to design algorithms that approximate the true optimum matching using only the preference orderings for each agent (induced by the hidden weights) as input. If no restrictions are placed on the weights, then one cannot hope to do better than the simple greedy algorithm, which yields a half optimal matching. Perhaps surprisingly, we show that by imposing a little structure on the weights, we can improve upon the trivial algorithm significantly: we design a 1.6-approximation algorithm for instances where the hidden weights obey the metric inequality. Our algorithm is obtained using a simple but powerful framework that allows us to combine greedy and random techniques in unconventional ways. These results are the first non-trivial ordinal approximation algorithms for such problems, and indicate that we can design robust matchings even when we are agnostic to the precise agent utilities.",
"title": ""
},
{
"docid": "b358f6c5813fa10c76e1f04827a2696e",
"text": "Information dispersal addresses the question of storing a file by distributing it among a set of servers in a storage-efficient way. We introduce the problem of verifiable information dispersal in an asynchronous network, where up to one third of the servers as well as an arbitrary number of clients might exhibit Byzantine faults. Verifiability ensures that the stored information is consistent despite such faults. We present a storage and communication-efficient scheme for asynchronous verifiable information dispersal that achieves an asymptotically optimal storage blow-up. Additionally, we show how to guarantee the secrecy of the stored data with respect to an adversary that may mount adaptive attacks. Our technique also yields a new protocol for asynchronous reliable broadcast that improves the communication complexity by an order of magnitude on large inputs.",
"title": ""
},
{
"docid": "1758a09dd2653145a21eb318a4029b3c",
"text": "This work describes our solution in the second edition of the ChaLearn LAP competition on Apparent Age Estimation. Starting from a pretrained version of the VGG-16 convolutional neural network for face recognition, we train it on the huge IMDB-Wiki dataset for biological age estimation and then fine-tune it for apparent age estimation using the relatively small competition dataset. We show that the precise age estimation of children is the cornerstone of the competition. Therefore, we integrate a separate \"children\" VGG-16 network for apparent age estimation of children between 0 and 12 years old in our final solution. The \"children\" network is fine-tuned from the \"general\" one. We employ different age encoding strategies for training \"general\" and \"children\" networks: the soft one (label distribution encoding) for the \"general\" network and the strict one (0/1 classification encoding) for the \"children\" network. Finally, we highlight the importance of the state-of-the-art face detection and face alignment for the final apparent age estimation. Our resulting solution wins the 1st place in the competition significantly outperforming the runner-up.",
"title": ""
}
] |
scidocsrr
|
3e3245e4472042e11325e56f1119c801
|
Analyzing the Blogosphere for Predicting the Success of Music and Movie Products
|
[
{
"docid": "e033eddbc92ee813ffcc69724e55aa84",
"text": "Over the past few years, weblogs have emerged as a new communication and publication medium on the Internet. In this paper, we describe the application of data mining, information extraction and NLP algorithms for discovering trends across our subset of approximately 100,000 weblogs. We publish daily lists of key persons, key phrases, and key paragraphs to a public web site, BlogPulse.com. In addition, we maintain a searchable index of weblog entries. On top of the search index, we have implemented trend search, which graphs the normalized trend line over time for a search query and provides a way to estimate the relative buzz of word of mouth for given topics over time.",
"title": ""
}
] |
[
{
"docid": "55fcc765be689166b0a44eef1a8f26b6",
"text": "A key goal of computer vision researchers is to create automated face recognition systems that can equal, and eventually surpass, human performance. To this end, it is imperative that computational researchers know of the key findings from experimental studies of face recognition by humans. These findings provide insights into the nature of cues that the human visual system relies upon for achieving its impressive performance and serve as the building blocks for efforts to artificially emulate these abilities. In this paper, we present what we believe are 19 basic results, with implications for the design of computational systems. Each result is described briefly and appropriate pointers are provided to permit an in-depth study of any particular result",
"title": ""
},
{
"docid": "2c92d42311f9708b7cb40f34551315e0",
"text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.",
"title": ""
},
{
"docid": "cefabe1b4193483d258739674b53f773",
"text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.",
"title": ""
},
{
"docid": "b3d915b4ff4d86b8c987b760fcf7d525",
"text": "We examine how exercising control over a technology platform can increase profits and innovation. Benefits depend on using a platform as a governance mechanism to influence ecosystem parters. Results can inform innovation strategy, antitrust and intellectual property law, and management of competition.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "1de568efbb57cc4e5d5ffbbfaf8d39ae",
"text": "The Insider Threat Study, conducted by the U.S. Secret Service and Carnegie Mellon University’s Software Engineering Institute CERT Program, analyzed insider cyber crimes across U.S. critical infrastructure sectors. The study indicates that management decisions related to organizational and employee performance sometimes yield unintended consequences magnifying risk of insider attack. Lack of tools for understanding insider threat, analyzing risk mitigation alternatives, and communicating results exacerbates the problem. The goal of Carnegie Mellon University’s MERIT (Management and Education of the Risk of Insider Threat) project is to develop such tools. MERIT uses system dynamics to model and analyze insider threats and produce interactive learning environments. These tools can be used by policy makers, security officers, information technology, human resources, and management to understand the problem and assess risk from insiders based on simulations of policies, cultural, technical, and procedural factors. This paper describes the MERIT insider threat model and simulation results.",
"title": ""
},
{
"docid": "013e96c212f7f58698acdae0adfcf374",
"text": "Since our ability to engineer biological systems is directly related to our ability to control gene expression, a central focus of synthetic biology has been to develop programmable genetic regulatory systems. Researchers are increasingly turning to RNA regulators for this task because of their versatility, and the emergence of new powerful RNA design principles. Here we review advances that are transforming the way we use RNAs to engineer biological systems. First, we examine new designable RNA mechanisms that are enabling large libraries of regulators with protein-like dynamic ranges. Next, we review emerging applications, from RNA genetic circuits to molecular diagnostics. Finally, we describe new experimental and computational tools that promise to accelerate our understanding of RNA folding, function and design.",
"title": ""
},
{
"docid": "a41bb1fe5670cc865bf540b34848f45f",
"text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.",
"title": ""
},
{
"docid": "906ef2b4130ff5c264835ff3c15918e5",
"text": "Exploratory big data applications often run on raw unstructured or semi-structured data formats, such as JSON files or text logs. These applications can spend 80–90% of their execution time parsing the data. In this paper, we propose a new approach for reducing this overhead: apply filters on the data’s raw bytestream before parsing. This technique, which we call raw filtering, leverages the features of modern hardware and the high selectivity of queries found in many exploratory applications. With raw filtering, a user-specified query predicate is compiled into a set of filtering primitives called raw filters (RFs). RFs are fast, SIMD-based operators that occasionally yield false positives, but never false negatives. We combine multiple RFs into an RF cascade to decrease the false positive rate and maximize parsing throughput. Because the best RF cascade is datadependent, we propose an optimizer that dynamically selects the combination of RFs with the best expected throughput, achieving within 10% of the global optimum cascade while adding less than 1.2% overhead. We implement these techniques in a system called Sparser, which automatically manages a parsing cascade given a data stream in a supported format (e.g., JSON, Avro, Parquet) and a user query. We show that many real-world applications are highly selective and benefit from Sparser. Across diverse workloads, Sparser accelerates state-of-the-art parsers such as Mison by up to 22× and improves end-to-end application performance by up to 9×. PVLDB Reference Format: S. Palkar, F. Abuzaid, P. Bailis, M. Zaharia. Filter Before You Parse: Faster Analytics on Raw Data with Sparser. PVLDB, 11(11): xxxx-yyyy, 2018. DOI: https://doi.org/10.14778/3236187.3236207",
"title": ""
},
{
"docid": "6cf4994b5ed0e17885f229856b7cd58d",
"text": "Recently Neural Architecture Search (NAS) has aroused great interest in both academia and industry, however it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithm or reinforcement learning as previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model pruning view to NAS problem. In specific, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularizations to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys both advantages of differentiability and efficiency, therefore can be directly applied to large datasets like ImageNet. Particularly, On CIFAR-10 dataset, DSO-NAS achieves an average test error 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "dfa62c69b1ab26e7e160100b69794674",
"text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.",
"title": ""
},
{
"docid": "498a4b526633c06d6eac9aa52ff5e1d2",
"text": "This talk surveys three challenge areas for mechanism design and describes the role approximation plays in resolving them. Challenge 1: optimal mechanisms are parameterized by knowledge of the distribution of agent's private types. Challenge 2: optimal mechanisms require precise distributional information. Challenge 3: in multi-dimensional settings economic analysis has failed to characterize optimal mechanisms. The theory of approximation is well suited to address these challenges. While the optimal mechanism may be parameterized by the distribution of agent's private types, there may be a single mechanism that approximates the optimal mechanism for any distribution. While the optimal mechanism may require precise distributional assumptions, there may be approximately optimal mechanism that depends only on natural characteristics of the distribution. While the multi-dimensional optimal mechanism may resist precise economic characterization, there may be simple description of approximately optimal mechanisms. Finally, these approximately optimal mechanisms, because of their simplicity and tractability, may be much more likely to arise in practice, thus making the theory of approximately optimal mechanism more descriptive than that of (precisely) optimal mechanisms. The talk will cover positive resolutions to these challenges with emphasis on basic techniques, relevance to practice, and future research directions.",
"title": ""
},
{
"docid": "f175e9c17aa38a17253de2663c4999f1",
"text": "As we increasingly rely on computers to process and manage our personal data, safeguarding sensitive information from malicious hackers is a fast growing concern. Among many forms of information leakage, covert timing channels operate by establishing an illegitimate communication channel between two processes and through transmitting information via timing modulation, thereby violating the underlying system's security policy. Recent studies have shown the vulnerability of popular computing environments, such as cloud computing, to these covert timing channels. In this work, we propose a new micro architecture-level framework, CC-Hunter, that detects the possible presence of covert timing channels on shared hardware. Our experiments demonstrate that Chanter is able to successfully detect different types of covert timing channels at varying bandwidths and message patterns.",
"title": ""
},
{
"docid": "1f4ff9d732b3512ee9b105f084edd3d2",
"text": "Today, as Network environments become more complex and cyber and Network threats increase, Organizations use wide variety of security solutions against today's threats. For proper and centralized control and management, range of security features need to be integrated into unified security package. Unified threat management (UTM) as a comprehensive network security solution, integrates all of security services such as firewall, URL filtering, virtual private networking, etc. in a single appliance. PfSense is a variant of UTM, and a customized FreeBSD (Unix-like operating system). Specially is used as a router and statefull firewall. It has many packages extend it's capabilities such as Squid3 package as a as a proxy server that cache data and SquidGuard, redirector and access controller plugin for squid3 proxy server. In this paper, with implementing UTM based on PfSense platform we use Squid3 proxy server and SquidGuard proxy filter to avoid extreme amount of unwanted uploading/ downloading over the internet by users in order to optimize our organization's bandwidth consumption. We begin by defining UTM and types of it, PfSense platform with it's key services and introduce a simple and operational solution for security stability and reducing the cost. Finally, results and statistics derived from this approach compared with the prior condition without PfSense platform.",
"title": ""
},
{
"docid": "074d4a552c82511d942a58b93d51c38a",
"text": "This is a survey of neural network applications in the real-world scenario. It provides a taxonomy of artificial neural networks (ANNs) and furnish the reader with knowledge of current and emerging trends in ANN applications research and area of focus for researchers. Additionally, the study presents ANN application challenges, contributions, compare performances and critiques methods. The study covers many applications of ANN techniques in various disciplines which include computing, science, engineering, medicine, environmental, agriculture, mining, technology, climate, business, arts, and nanotechnology, etc. The study assesses ANN contributions, compare performances and critiques methods. The study found that neural-network models such as feedforward and feedback propagation artificial neural networks are performing better in its application to human problems. Therefore, we proposed feedforward and feedback propagation ANN models for research focus based on data analysis factors like accuracy, processing speed, latency, fault tolerance, volume, scalability, convergence, and performance. Moreover, we recommend that instead of applying a single method, future research can focus on combining ANN models into one network-wide application.",
"title": ""
},
{
"docid": "ec5d4c571f8cd85bf94784199ab10884",
"text": "Researchers have shown that a wordnet for a new language, possibly resource-poor, can be constructed automatically by translating wordnets of resource-rich languages. The quality of these constructed wordnets is affected by the quality of the resources used such as dictionaries and translation methods in the construction process. Recent work shows that vector representation of words (word embeddings) can be used to discover related words in text. In this paper, we propose a method that performs such similarity computation using word embeddings to improve the quality of automatically constructed wordnets.",
"title": ""
},
{
"docid": "6773b060fd16b6630f581eb65c5c6488",
"text": "Proximity detection is one of the most common location-based applications in daily life when users intent to find their friends who get into their proximity. Studies on protecting user privacy information during the detection process have been widely concerned. In this paper, we first analyze a theoretical and experimental analysis of existing solutions for proximity detection, and then demonstrate that these solutions either provide a weak privacy preserving or result in a high communication and computational complexity. Accordingly, a location difference-based proximity detection protocol is proposed based on the Paillier cryptosystem for the purpose of dealing with the above shortcomings. The analysis results through an extensive simulation illustrate that our protocol outperforms traditional protocols in terms of communication and computation cost.",
"title": ""
},
{
"docid": "3e28cbfc53f6c42bb0de2baf5c1544aa",
"text": "Cloud computing is an emerging paradigm which allows the on-demand delivering of software, hardware, and data as services. As cloud-based services are more numerous and dynamic, the development of efficient service provisioning policies become increasingly challenging. Game theoretic approaches have shown to gain a thorough analytical understanding of the service provisioning problem.\n In this paper we take the perspective of Software as a Service (SaaS) providers which host their applications at an Infrastructure as a Service (IaaS) provider. Each SaaS needs to comply with quality of service requirements, specified in Service Level Agreement (SLA) contracts with the end-users, which determine the revenues and penalties on the basis of the achieved performance level. SaaS providers want to maximize their revenues from SLAs, while minimizing the cost of use of resources supplied by the IaaS provider. Moreover, SaaS providers compete and bid for the use of infrastructural resources. On the other hand, the IaaS wants to maximize the revenues obtained providing virtualized resources. In this paper we model the service provisioning problem as a Generalized Nash game, and we propose an efficient algorithm for the run time management and allocation of IaaS resources to competing SaaSs.",
"title": ""
},
{
"docid": "d67e0fa20185e248a18277e381c9d42d",
"text": "Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.",
"title": ""
}
] |
scidocsrr
|
259ed9eed850bd92677c3cc46029f478
|
Mining Potential Domain Expertise in Pinterest
|
[
{
"docid": "ccf40417ca3858d69c4cd3fd031ea7c1",
"text": "Online social networks (OSNs) have become popular platforms for people to connect and interact with each other. Among those networks, Pinterest has recently become noteworthy for its growth and promotion of visual over textual content. The purpose of this study is to analyze this imagebased network in a gender-sensitive fashion, in order to understand (i) user motivation and usage pattern in the network, (ii) how communications and social interactions happen and (iii) how users describe themselves to others. This work is based on more than 220 million items generated by 683,273 users. We were able to find significant differences w.r.t. all mentioned aspects. We observed that, although the network does not encourage direct social communication, females make more use of lightweight interactions than males. Moreover, females invest more effort in reciprocating social links, are more active and generalist in content generation, and describe themselves using words of affection and positive emotions. Males, on the other hand, are more likely to be specialists and tend to describe themselves in an assertive way. We also observed that each gender has different interests in the network, females tend to make more use of the network’s commercial capabilities, while males are more prone to the role of curators of items that reflect their personal taste. It is important to understand gender differences in online social networks, so one can design services and applications that leverage human social interactions and provide more targeted and relevant user experiences.",
"title": ""
},
{
"docid": "f9b01c707482eebb9af472fd019f56eb",
"text": "In this paper we discuss the task of discovering topical influ ence within the online social network T WITTER. The main goal of this research is to discover who the influenti al users are with respect to a certain given topic. For this research we have sampled a portion of the T WIT ER social graph, from which we have distilled topics and topical activity, and constructed a se t of diverse features which we believe are useful in capturing the concept of topical influence. We will use sev eral correlation and classification techniques to determine which features perform best with respect to the TWITTER network. Our findings support the claim that only looking at simple popularity features such a s the number of followers is not enough to capture the concept of topical influence. It appears that mor e int icate features are required.",
"title": ""
}
] |
[
{
"docid": "d315aa25c69ad39164c458dabe914417",
"text": "The increase of scientific collaboration coincides with the technological and social advancement of social software applications which can change the way we research. Among social software, social network sites have recently gained immense popularity in a hedonic context. This paper focuses on social network sites as an emerging application designed for the specific needs of researchers. To give an overview about these sites we use a data set of 24 case studies and in-depth interviews with the founders of ten social research network sites. The gathered data leads to a first tentative taxonomy and to a definition of SRNS identifying four basic functionalities identity and network management, communication, information management, and collaboration. The sites in the sample correspond to one of the following four types: research directory sites, research awareness sites, research management sites and research collaboration sites. These results conclude with implications for providers of social research network sites.",
"title": ""
},
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "9d33565dbd5148730094a165bb2e968f",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "0e48de6dc8d1f51eb2a7844d4d67b8f5",
"text": "Vygotsky asserted that the student who had mastered algebra had attained “a new higher plane of thought”, a level of abstraction and generalization which transformed the meaning of the lower (arithmetic) level. He also affirmed the importance of the mastery of scientific concepts for the development of the ability to think theoretically, and emphasized the mediating role of semiotic forms and symbol systems in developing this ability. Although historically in mathematics and traditionally in education, algebra followed arithmetic, Vygotskian theory supports the reversal of this sequence in the service of orienting children to the most abstract and general level of understanding initially. This organization of learning activity for the development of algebraic thinking is very different from the introduction of elements of algebra into the study of arithmetic in the early grades. The intended theoretical (algebraic) understanding is attained through appropriation of psychological tools, in the form of specially designed schematics, whose mastery is not merely incidental to but the explicit focus of instruction. The author’s research in implementing Davydov’s Vygotskian-based elementary mathematics curriculum in the U.S. suggests that these characteristics function synergistically to develop algebraic understanding and computational competence as well. Kurzreferat: Vygotsky ging davon aus, dass Lernende, denen es gelingt, Algebra zu beherrschen, „ein höheres gedankliches Niveau” erreicht hätten, eine Ebene von Abstraktion und Generalisierung, welche die Bedeutung der niederen (arithmetischen) Ebene verändert. Er bestätigte auch die Relevanz der Beherrschung von wissenschaftlichen Begriffen für die Entwicklung der Fähigkeit, theoretisch zu denken und betonte dabei die vermittelnde Rolle von semiotischen Formen und Symbolsystemen für die Ausformung dieser Fähigkeit. Obwohl mathematik-his tor isch und t radi t ionel l erziehungswissenschaftlich betrachtet, Algebra der Arithmetik folgte, stützt Vygotski’s Theorie die Umkehrung dieser Sequenz bei dem Bemühen, Kinder an das abstrakteste und allgemeinste Niveau des ersten Verstehens heranzuführen. Diese Organisation von Lernaktivitäten für die Ausbildung algebraischen Denkens unterscheidet sich erheblich von der Einführung von Algebra-Elementen in das Lernen von Arithmetik während der ersten Schuljahre. Das beabsichtigte theoretische (algebraische) Verstehen wird erreicht durch die Aneignung psychologischer Mittel, und zwar in Form von dafür speziell entwickelten Schemata, deren Beherrschung nicht nur beiläufig erfolgt, sondern Schwerpunkt des Unterrichts ist. Die im Beitrag beschriebenen Forschungen zur Implementierung von Davydov’s elementarmathematischen Curriculum in den Vereinigten Staaten, das auf Vygotsky basiert, legt die Vermutung nahe, dass diese Charakteristika bei der Entwicklung von algebraischem Verstehen und von Rechenkompetenzen synergetisch funktionieren. ZDM-Classification: C30, D30, H20 l. Historical Context Russian psychologist Lev Vygotsky stated clearly his perspective on algebraic thinking. Commenting on its development within the structure of the Russian curriculum in the early decades of the twentieth century,",
"title": ""
},
{
"docid": "b3a148abb00e35e59a7d05289595d438",
"text": "CONTEXT\nMajor depressive disorder (MDD) occurs in 15% to 23% of patients with acute coronary syndromes and constitutes an independent risk factor for morbidity and mortality. However, no published evidence exists that antidepressant drugs are safe or efficacious in patients with unstable ischemic heart disease.\n\n\nOBJECTIVE\nTo evaluate the safety and efficacy of sertraline treatment of MDD in patients hospitalized for acute myocardial infarction (MI) or unstable angina and free of other life-threatening medical conditions.\n\n\nDESIGN AND SETTING\nRandomized, double-blind, placebo-controlled trial conducted in 40 outpatient cardiology centers and psychiatry clinics in the United States, Europe, Canada, and Australia. Enrollment began in April 1997 and follow-up ended in April 2001.\n\n\nPATIENTS\nA total of 369 patients with MDD (64% male; mean age, 57.1 years; mean 17-item Hamilton Depression [HAM-D] score, 19.6; MI, 74%; unstable angina, 26%).\n\n\nINTERVENTION\nAfter a 2-week single-blind placebo run-in, patients were randomly assigned to receive sertraline in flexible dosages of 50 to 200 mg/d (n = 186) or placebo (n = 183) for 24 weeks.\n\n\nMAIN OUTCOME MEASURES\nThe primary (safety) outcome measure was change from baseline in left ventricular ejection fraction (LVEF); secondary measures included surrogate cardiac measures and cardiovascular adverse events, as well as scores on the HAM-D scale and Clinical Global Impression Improvement scale (CGI-I) in the total randomized sample, in a group with any prior history of MDD, and in a more severe MDD subgroup defined a priori by a HAM-D score of at least 18 and history of 2 or more prior episodes of MDD.\n\n\nRESULTS\nSertraline had no significant effect on mean (SD) LVEF (sertraline: baseline, 54% [10%]; week 16, 54% [11%]; placebo: baseline, 52% [13%]; week 16, 53% [13%]), treatment-emergent increase in ventricular premature complex (VPC) runs (sertraline: 13.1%; placebo: 12.9%), QTc interval greater than 450 milliseconds at end point (sertraline: 12%; placebo: 13%), or other cardiac measures. All comparisons were statistically nonsignificant (P> or = .05). The incidence of severe cardiovascular adverse events was 14.5% with sertraline and 22.4% with placebo. In the total randomized sample, the CGI-I (P =.049), but not the HAM-D (P =.14), favored sertraline. The CGI-I responder rates for sertraline were significantly higher than for placebo in the total sample (67% vs 53%; P =.01), in the group with at least 1 prior episode of depression (72% vs 51%; P =.003), and in the more severe MDD group (78% vs 45%; P =.001). In the latter 2 groups, both CGI-I and HAM-D measures were significantly better in those assigned to sertraline.\n\n\nCONCLUSION\nOur results suggest that sertraline is a safe and effective treatment for recurrent depression in patients with recent MI or unstable angina and without other life-threatening medical conditions.",
"title": ""
},
{
"docid": "c12d27988e70e9b3e6987ca2f0ca8bca",
"text": "In this tutorial, we introduce the basic theory behind Stega nography and Steganalysis, and present some recent algorithms and devel opm nts of these fields. We show how the existing techniques used nowadays are relate d to Image Processing and Computer Vision, point out several trendy applicati ons of Steganography and Steganalysis, and list a few great research opportunities j ust waiting to be addressed.",
"title": ""
},
{
"docid": "6247c827c6fdbc976b900e69a9eb275c",
"text": "Despite the fact that commercial computer systems have been in existence for almost three decades, many systems in the process of being implemented may be classed as failures. One of the factors frequently cited as important to successful system development is involving users in the design and implementation process. This paper reports the results of a field study, conducted on data from forty-two systems, that investigates the role of user involvement and factors affecting the employment of user involvement on the success of system development. Path analysis was used to investigate both the direct effects of the contingent variables on system success and the effect of user involvement as a mediating variable between the contingent variables and system success. The results show that high system complexity and constraints on the resources available for system development are associated with less successful systems.",
"title": ""
},
{
"docid": "0fd7a70c0d46100d32e0bcb0f65528e3",
"text": "INTRODUCTION Document clustering is an automatic grouping of text documents into clusters so that documents within a cluster have high similarity in comparison to one another, but are dissimilar to documents in other clusters. Unlike document classification (Wang, Zhou, and He, 2001), no labeled documents are provided in clustering; hence, clustering is also known as unsupervised learning. Hierarchical document clustering organizes clusters into a tree or a hierarchy that facilitates browsing. The parent-child relationship among the nodes in the tree can be viewed as a topic-subtopic relationship in a subject hierarchy such as the Yahoo! directory. This chapter discusses several special challenges in hierarchical document clustering: high dimensionality, high volume of data, ease of browsing, and meaningful cluster labels. State-ofthe-art document clustering algorithms are reviewed: the partitioning method (Steinbach, Karypis, and Kumar, 2000), agglomerative and divisive hierarchical clustering (Kaufman and Rousseeuw, 1990), and frequent itemset-based hierarchical clustering (Fung, Wang, and Ester, 2003). The last one, which was recently developed by the authors, is further elaborated since it has been specially designed to address the hierarchical document clustering problem.",
"title": ""
},
{
"docid": "add30dc8d14a26eba48dbe5baaaf4169",
"text": "The authors investigated whether intensive musical experience leads to enhancements in executive processing, as has been shown for bilingualism. Young adults who were bilinguals, musical performers (instrumentalists or vocalists), or neither completed 3 cognitive measures and 2 executive function tasks based on conflict. Both executive function tasks included control conditions that assessed performance in the absence of conflict. All participants performed equivalently for the cognitive measures and the control conditions of the executive function tasks, but performance diverged in the conflict conditions. In a version of the Simon task involving spatial conflict between a target cue and its position, bilinguals and musicians outperformed monolinguals, replicating earlier research with bilinguals. In a version of the Stroop task involving auditory and linguistic conflict between a word and its pitch, the musicians performed better than the other participants. Instrumentalists and vocalists did not differ on any measure. Results demonstrate that extended musical experience enhances executive control on a nonverbal spatial task, as previously shown for bilingualism, but also enhances control in a more specialized auditory task, although the effect of bilingualism did not extend to that domain.",
"title": ""
},
{
"docid": "2643c7960df0aed773aeca6e04fde67e",
"text": "Many studies utilizing dogs, cats, birds, fish, and robotic simulations of animals have tried to ascertain the health benefits of pet ownership or animal-assisted therapy in the elderly. Several small unblinded investigations outlined improvements in behavior in demented persons given treatment in the presence of animals. Studies piloting the use of animals in the treatment of depression and schizophrenia have yielded mixed results. Animals may provide intangible benefits to the mental health of older persons, such as relief social isolation and boredom, but these have not been formally studied. Several investigations of the effect of pets on physical health suggest animals can lower blood pressure, and dog walkers partake in more physical activity. Dog walking, in epidemiological studies and few preliminary trials, is associated with lower complication risk among patients with cardiovascular disease. Pets may also have harms: they may be expensive to care for, and their owners are more likely to fall. Theoretically, zoonotic infections and bites can occur, but how often this occurs in the context of pet ownership or animal-assisted therapy is unknown. Despite the poor methodological quality of pet research after decades of study, pet ownership and animal-assisted therapy are likely to continue due to positive subjective feelings many people have toward animals.",
"title": ""
},
{
"docid": "6400b594b7a7624cf638961ee904e7d0",
"text": "As the demands for portable electronic products increase, through-silicon-via (TSV)-based three-dimensional integrated-circuit (3-D IC) integration is becoming increasingly important. Micro-bump-bonded interconnection is one approach that has great potential to meet this requirement. In this paper, a 30-μm pitch chip-to-chip (C2C) interconnection with Cu/Ni/SnAg micro bumps was assembled using the gap-controllable thermal bonding method. The bonding parameters were evaluated by considering the variation in the contact resistance after bonding. The effects of the bonding time and temperature on the IMC thickness of the fabricated C2C interconnects are also investigated to determine the correlation between its thickness and reliability performance. The reliability of the C2C interconnects with the selected underfill was studied by performing a -55°C- 125°C temperature cycling test (TCT) for 2000 cycles and a 150°C high-temperature storage (HTS) test for 2000 h. The interfaces of the failed samples in the TCT and HTS tests are then inspected by scanning electron microscopy (SEM), which is utilized to obtain cross-sectional images. To validate the experimental results, finite-element (FE) analysis is also conducted to elucidate the interconnect reliability of the C2C interconnection. Results show that consistent bonding quality and stable contact resistance of the fine-pitch C2C interconnection with the micro bumps were achieved by giving the appropriate choice of the bonding parameters, and those bonded joints can thus serve as reliable interconnects for use in 3-D chip stacking.",
"title": ""
},
{
"docid": "75d57c2f82fb7852feef4c7bcde41590",
"text": "This paper studies the causal impact of sibling gender composition on participation in Science, Technology, Engineering, and Mathematics (STEM) education. I focus on a sample of first-born children who all have a younger biological sibling, using rich administrative data on the total Danish population. The randomness of the secondborn siblings’ gender allows me to estimate the causal effect of having an opposite sex sibling relative to a same sex sibling. The results are robust to family size and show that having a second-born opposite sex sibling makes first-born men more and women less likely to enroll in a STEM program. Although sibling gender composition has no impact on men’s probability of actually completing a STEM degree, it has a powerful effect on women’s success within these fields: women with a younger brother are eleven percent less likely to complete any field-specific STEM education relative to women with a sister. I provide evidence that parents of mixed sex children gender-specialize their parenting more than parents of same sex children. These findings indicate that the family environment plays in important role for shaping interests in STEM fields. JEL classification: I2, J1, J3",
"title": ""
},
{
"docid": "c7351e8ce6d32b281d5bd33b245939c6",
"text": "In TREC 2002 the Berkeley group participated only in the English-Arabic cross-language retrieval (CLIR) track. One Arabic monolingual run and three English-Arabic cross-language runs were submitted. Our approach to the crosslanguage retrieval was to translate the English topics into Arabic using online English-Arabic machine translation systems. The four official runs are named as BKYMON, BKYCL1, BKYCL2, and BKYCL3. The BKYMON is the Arabic monolingual run, and the other three runs are English-to-Arabic cross-language runs. This paper reports on the construction of an Arabic stoplist and two Arabic stemmers, and the experiments on Arabic monolingual retrieval, English-to-Arabic cross-language retrieval.",
"title": ""
},
{
"docid": "e13798bd8605c3c679f6e72df515d35a",
"text": "After more than a decade of research in Model-Driven Engineering (MDE), the state-of-the-art and the state-of-the-practice in MDE has significantly progressed. Therefore, during this workshop we raised the question of how to proceed next, and we identified a number of future challenges in the field of MDE. The objective of the workshop was to provide a forum for discussing the future of MDE research and practice. Seven presenters shared their vision on the future challenges in the field of MDE. Four breakout groups discussed scalability, consistency and co-evolution, formal foundations, and industrial adoption, respectively. These themes were identified as major categories of challenges by the participants. This report summarises the different presentations, the MDE challenges identified by the workshop participants, and the discussions of the breakout groups.",
"title": ""
},
{
"docid": "cd1bf567e2e8bfbf460abb3ac1a0d4a5",
"text": "Memory channel contention is a critical performance bottleneck in modern systems that have highly parallelized processing units operating on large data sets. The memory channel is contended not only by requests from different user applications (CPU access) but also by system requests for peripheral data (IO access), usually controlled by Direct Memory Access (DMA) engines. Our goal, in this work, is to improve system performance byeliminating memory channel contention between CPU accesses and IO accesses. To this end, we propose a hardware-software cooperative data transfer mechanism, Decoupled DMA (DDMA) that provides a specialized low-cost memory channel for IO accesses. In our DDMA design, main memoryhas two independent data channels, of which one is connected to the processor (CPU channel) and the other to the IO devices (IO channel), enabling CPU and IO accesses to be served on different channels. Systemsoftware or the compiler identifies which requests should be handled on the IO channel and communicates this to the DDMA engine, which then initiates the transfers on the IO channel. By doing so, our proposal increasesthe effective memory channel bandwidth, thereby either accelerating data transfers between system components, or providing opportunities to employ IO performance enhancement techniques (e.g., aggressive IO prefetching)without interfering with CPU accessesWe demonstrate the effectiveness of our DDMA framework in two scenarios: (i) CPU-GPU communication and (ii) in-memory communication (bulk datacopy/initialization within the main memory). By effectively decoupling accesses for CPU-GPU communication and in-memory communication from CPU accesses, our DDMA-based design achieves significant performanceimprovement across a wide variety of system configurations (e.g., 20% average performance improvement on a typical 2-channel 2-rank memory system).",
"title": ""
},
{
"docid": "aecd7a910b52b6e34e10f10a12d0f966",
"text": "Language processing is an example of implicit learning of multiple statistical cues that provide probabilistic information regarding word structure and use. Much of the current debate about language embodiment is devoted to how action words are represented in the brain, with motor cortex activity evoked by these words assumed to selectively reflect conceptual content and/or its simulation. We investigated whether motor cortex activity evoked by manual action words (e.g., caress) might reflect sensitivity to probabilistic orthographic–phonological cues to grammatical category embedded within individual words. We first review neuroimaging data demonstrating that nonwords evoke activity much more reliably than action words along the entire motor strip, encompassing regions proposed to be action category specific. Using fMRI, we found that disyllabic words denoting manual actions evoked increased motor cortex activity compared with non-body-part-related words (e.g., canyon), activity which overlaps that evoked by observing and executing hand movements. This result is typically interpreted in support of language embodiment. Crucially, we also found that disyllabic nonwords containing endings with probabilistic cues predictive of verb status (e.g., -eve) evoked increased activity compared with nonwords with endings predictive of noun status (e.g., -age) in the identical motor area. Thus, motor cortex responses to action words cannot be assumed to selectively reflect conceptual content and/or its simulation. Our results clearly demonstrate motor cortex activity reflects implicit processing of ortho-phonological statistical regularities that help to distinguish a word's grammatical class.",
"title": ""
},
{
"docid": "07c9bf0432e67580b7e19a2889aa80a9",
"text": "We give a detailed account of the one-way quantum computer, a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. We prove its universality, describe why its underlying computational model is different from the network model of quantum computation, and relate quantum algorithms to mathematical graphs. Further we investigate the scaling of required resources and give a number of examples for circuits of practical interest such as the circuit for quantum Fourier transformation and for the quantum adder. Finally, we describe computation with clusters of finite size.",
"title": ""
},
{
"docid": "1afd50a91b67bd1eab0db1c2a19a6c73",
"text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.",
"title": ""
},
{
"docid": "6be09c03c23168af7d8f21feb905020e",
"text": "Software test effort estimation has always been a challenge for the software practitioners, because it consumes approximately half of the overall development costs of any software project. In order to provide effective software maintenance it is necessary to carry out the regression testing of the software. Hence, this research work aims to propose a measure for the estimation of the software test effort in regression testing. Since, the effort required developing or test software shall depend on various major contributing factors like, therefore, the proposed measure first estimates the change type of any software, make test cases for any software, then calculate execution complexity of any software and tester rank. In general, the regression testing takes more time and cost to perform it. Therefore, the effort estimation in regression testing is utmost required in order to compute man-hour for any software. In order to analyze the validity of the proposed test effort estimation measure, the measure is compared for various ranges of problem from small, mid and large size program to real life software projects. The result obtained shows that, the proposed test measure is a comprehensive one and compares well with other prevalent measures proposed in the past.",
"title": ""
},
{
"docid": "d2225efeffbb885bc9e3e9322c214a2e",
"text": "A 40-Gb/s transimpedance amplifier (TIA) is proposed using multistage inductive-series peaking for low group-delay variation. A transimpedance limit for multistage TIAs is derived, and a bandwidth-enhancement technique using inductive-series π -networks is analyzed. A design method for low group delay constrained to 3-dB bandwidth enhancement is suggested. The TIA is implemented in a 0.13-μm CMOS process and achieves a 3-dB bandwidth of 29 GHz. The transimpedance gain is 50 dB·Ω , and the transimpedance group-delay variation is less than 16 ps over the 3-dB bandwidth. The chip occupies an area of 0.4 mm2, including the pads, and consumes 45.7 mW from a 1.5-V supply. The measured TIA demonstrates a transimpedance figure of merit of 200.7 Ω/pJ.",
"title": ""
}
] |
scidocsrr
|
f747d5351707e12c29021a9b41ca5792
|
Effectiveness of virtual reality-based pain control with multiple treatments.
|
[
{
"docid": "bf6d56c2fd716802b8e2d023f86a4225",
"text": "This is the first case report to demonstrate the efficacy of immersive computer-generated virtual reality (VR) and mixed reality (touching real objects which patients also saw in VR) for the treatment of spider phobia. The subject was a 37-yr-old female with severe and incapacitating fear of spiders. Twelve weekly 1-hr sessions were conducted over a 3-month period. Outcome was assessed on measures of anxiety, avoidance, and changes in behavior toward real spiders. VR graded exposure therapy was successful for reducing fear of spiders providing converging evidence for a growing literature showing the effectiveness of VR as a new medium for exposure therapy.",
"title": ""
}
] |
[
{
"docid": "750846bc27dc013bd0d392959caf3ecc",
"text": "Analysis of the WinZip en ryption method Tadayoshi Kohno May 8, 2004 Abstra t WinZip is a popular ompression utility for Mi rosoft Windows omputers, the latest version of whi h is advertised as having \\easy-to-use AES en ryption to prote t your sensitive data.\" We exhibit several atta ks against WinZip's new en ryption method, dubbed \\AE-2\" or \\Advan ed En ryption, version two.\" We then dis uss se ure alternatives. Sin e at a high level the underlying WinZip en ryption method appears se ure (the ore is exa tly En ryptthen-Authenti ate using AES-CTR and HMAC-SHA1), and sin e one of our atta ks was made possible be ause of the way that WinZip Computing, In . de ided to x a di erent se urity problem with its previous en ryption method AE-1, our atta ks further unders ore the subtlety of designing ryptographi ally se ure software.",
"title": ""
},
{
"docid": "96d8971bf4a8d18f4471019796348e1b",
"text": "Most wired active electrodes reported so far have a gain of one and require at least three wires. This leads to stiff cables, large connectors and additional noise for the amplifier. The theoretical advantages of amplifying the signal on the electrodes right from the source has often been described, however, rarely implemented. This is because a difference in the gain of the electrodes due to component tolerances strongly limits the achievable common mode rejection ratio (CMRR). In this paper, we introduce an amplifier for bioelectric events where the major part of the amplification (40 dB) is achieved on the electrodes to minimize pick-up noise. The electrodes require only two wires of which one can be used for shielding, thus enabling smaller connecters and smoother cables. Saturation of the electrodes is prevented by a dc-offset cancelation scheme with an active range of /spl plusmn/250 mV. This error feedback simultaneously allows to measure the low frequency components down to dc. This enables the measurement of slow varying signals, e.g., the change of alertness or the depolarization before an epileptic seizure normally not visible in a standard electroencephalogram (EEG). The amplifier stage provides the necessary supply current for the electrodes and generates the error signal for the feedback loop. The amplifier generates a pseudodifferential signal where the amplified bioelectric event is present on one lead, but the common mode signal is present on both leads. Based on the pseudodifferential signal we were able to develop a new method to compensate for a difference in the gain of the active electrodes which is purely software based. The amplifier system is then characterized and the input referred noise as well as the CMRR are measured. For the prototype circuit the CMRR evaluated to 78 dB (without the driven-right-leg circuit). The applicability of the system is further demonstrated by the recording of an ECG.",
"title": ""
},
{
"docid": "2006a3fd87a3d7228b2a25061f7eb06b",
"text": "Thailand suffers from frequent flooding during the monsoon season and droughts in summer. In some places, severe cases of both may even occur. Managing water resources effectively requires a good information system for decision-making. There is currently a lack in knowledge sharing between organizations and researchers responsible. These are the experts in monitoring and controlling the water supply and its conditions. The knowledge owned by these experts are not captured, classified and integrated into an information system for decisionmaking. Ontologies are formal knowledge representation models. Knowledge management and artificial intelligence technology is a basic requirement for developing ontology-based semantic search on the Web. In this paper, we present ontology modeling approach that is based on the experiences of the researchers. The ontology for drought management consists of River Basin Ontology, Statistics Ontology and Task Ontology to facilitate semantic match during search. The hybrid ontology architecture can also be used for drought management",
"title": ""
},
{
"docid": "2a987f50527c4b4501ae29493f703e32",
"text": "The emergence of novel techniques for automatic anomaly detection in surveillance videos has significantly reduced the burden of manual processing of large, continuous video streams. However, existing anomaly detection systems suffer from a high false-positive rate and also, are not real-time, which makes them practically redundant. Furthermore, their predefined feature selection techniques limit their application to specific cases. To overcome these shortcomings, a dynamic anomaly detection and localization system is proposed, which uses deep learning to automatically learn relevant features. In this technique, each video is represented as a group of cubic patches for identifying local and global anomalies. A unique sparse denoising autoencoder architecture is used, that significantly reduced the computation time and the number of false positives in frame-level anomaly detection by more than 2.5%. Experimental analysis on two benchmark data sets - UMN dataset and UCSD Pedestrian dataset, show that our algorithm outperforms the state-of-the-art models in terms of false positive rate, while also showing a significant reduction in computation time.",
"title": ""
},
{
"docid": "199d2f3d640fbb976ef27c8d129922ef",
"text": "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to ~55% for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover’s distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by ~30% for the CIFAR-10 dataset with only 5% globally shared data.",
"title": ""
},
{
"docid": "651d048aaae1ce1608d3d9f0f09d4b9b",
"text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.",
"title": ""
},
{
"docid": "90738b84c4db0a267c7213c923368e6a",
"text": "Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.",
"title": ""
},
{
"docid": "d568194d6b856243056c072c96c76115",
"text": "OBJECTIVE\nTo develop an evidence-based guideline to help clinicians make decisions about when and how to safely taper and stop antipsychotics; to focus on the highest level of evidence available and seek input from primary care professionals in the guideline development, review, and endorsement processes.\n\n\nMETHODS\nThe overall team comprised 9 clinicians (1 family physician, 1 family physician specializing in long-term care, 1 geriatric psychiatrist, 2 geriatricians, 4 pharmacists) and a methodologist; members disclosed conflicts of interest. For guideline development, a systematic process was used, including the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Evidence was generated from a Cochrane systematic review of antipsychotic deprescribing trials for the behavioural and psychological symptoms of dementia, and a systematic review was conducted to assess the evidence behind the benefits of using antipsychotics for insomnia. A review of reviews of the harms of continued antipsychotic use was performed, as well as narrative syntheses of patient preferences and resource implications. This evidence and GRADE quality-of-evidence ratings were used to generate recommendations. The team refined guideline content and recommendation wording through consensus and synthesized clinical considerations to address common front-line clinician questions. The draft guideline was distributed to clinicians and stakeholders for review and revisions were made at each stage.\n\n\nRECOMMENDATIONS\nWe recommend deprescribing antipsychotics for adults with behavioural and psychological symptoms of dementia treated for at least 3 months (symptoms stabilized or no response to an adequate trial) and for adults with primary insomnia treated for any duration or secondary insomnia in which underlying comorbidities are managed. A decision-support algorithm was developed to accompany the guideline.\n\n\nCONCLUSION\nAntipsychotics are associated with harms and can be safely tapered. Patients and caregivers might be more amenable to deprescribing if they understand the rationale (potential for harm), are involved in developing the tapering plan, and are offered behavioural advice or management. This guideline provides recommendations for making decisions about when and how to reduce the dose of or stop antipsychotics. Recommendations are meant to assist with, not dictate, decision making in conjunction with patients and families.",
"title": ""
},
{
"docid": "c3a8bbd853667155eee4cfb74692bd0f",
"text": "The contemporary approach to database system architecture requires the complete integration of data into a single, centralized database; while multiple logical databases can be supported by current database management software, techniques for relating these databases are strictly ad hoc. This problem is aggravated by the trend toward networks of small to medium size computer systems, as opposed to large, stand-alone main-frames. Moreover, while current research on distributed databases aims to provide techniques that support the physical distribution of data items in a computer network environment, current approaches require a distributed database to be logically centralized.",
"title": ""
},
{
"docid": "dde5eb29c02f95cbf47bb9a3895d7fd8",
"text": "Text password is the most popular form of user authentication on websites due to its convenience and simplicity. However, users' passwords are prone to be stolen and compromised under different threats and vulnerabilities. Firstly, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect; when an adversary compromises one password, she will exploit it to gain access to more websites. Second, typing passwords into untrusted computers suffers password thief threat. An adversary can launch several password stealing attacks to snatch passwords, such as phishing, keyloggers and malware. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks. oPass only requires each participating website possesses a unique phone number, and involves a telecommunication service provider in registration and recovery phases. Through oPass, users only need to remember a long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with the conventional web authentication mechanisms.",
"title": ""
},
{
"docid": "ce839ea9b5cc8de275b634c920f45329",
"text": "As a matter of fact, most natural structures are complex topology structures with intricate holes or irregular surface morphology. These structures can be used as lightweight infill, porous scaffold, energy absorber or micro-reactor. With the rapid advancement of 3D printing, the complex topology structures can now be efficiently and accurately fabricated by stacking layered materials. The novel manufacturing technology and application background put forward new demands and challenges to the current design methodologies of complex topology structures. In this paper, a brief review on the development of recent complex topology structure design methods was provided; meanwhile, the limitations of existing methods and future work are also discussed in the end.",
"title": ""
},
{
"docid": "97c40f796f104587a465f5d719653181",
"text": "Although some theory suggests that it is impossible to increase one’s subjective well-being (SWB), our ‘sustainable happiness model’ (Lyubomirsky, Sheldon, & Schkade, 2005) specifies conditions under which this may be accomplished. To illustrate the three classes of predictor in the model, we first review research on the demographic/circumstantial, temperament/personality, and intentional/experiential correlates of SWB. We then introduce the sustainable happiness model, which suggests that changing one’s goals and activities in life is the best route to sustainable new SWB. However, the goals and activities must be of certain positive types, must fit one’s personality and needs, must be practiced diligently and successfully, must be varied in their timing and enactment, and must provide a continued stream of fresh positive experiences. Research supporting the model is reviewed, including new research suggesting that happiness intervention effects are not just placebo effects. Everyone wants to be happy. Indeed, happiness may be the ultimate fundamental ‘goal’ that people pursue in their lives (Diener, 2000), a pursuit enshrined as an inalienable right in the US Declaration of Independence. The question of what produces happiness and wellbeing is the subject of a great deal of contemporary research, much of it falling under the rubric of ‘positive psychology’, an emerging field that also considers issues such as what makes for optimal relationships, optimal group functioning, and optimal communities. In this article, we first review some prominent definitions, theories, and research findings in the well-being literature. We then focus in particular on the question of whether it is possible to become lastingly happier in one’s life, drawing from our recent model of sustainable happiness. Finally, we discuss some recent experimental data suggesting that it is indeed possible to boost one’s happiness level, and to sustain that newfound level. A number of possible definitions of happiness exist. Let us start with the three proposed by Ed Diener in his landmark Psychological Bulletin 130 Is It Possible to Become Happier © 2007 The Authors Social and Personality Psychology Compass 1/1 (2007): 129–145, 10.1111/j.1751-9004.2007.00002.x Journal Compilation © 2007 Blackwell Publishing Ltd (1984) article. The first is ‘leading a virtuous life’, in which the person adheres to society’s vision of morality and proper conduct. This definition makes no reference to the person’s feelings or emotions, instead apparently making the implicit assumption that reasonably positive feelings will ensue if the person toes the line. A second definition of happiness involves a cognitive evaluation of life as a whole. Are you content, overall, or would you do things differently given the opportunity? This reflects a personcentered view of happiness, and necessarily taps peoples’ subjective judgments of whether they are satisfied with their lives. A third definition refers to typical moods. Are you typically in a positive mood (i.e., inspired, pleased, excited) or a negative mood (i.e., anxious, upset, depressed)? In this person-centered view, it is the balance of positive to negative mood that matters (Bradburn, 1969). 
Although many other conceptions of well-being exist (Lyubomirsky & Lepper, 1999; Ryan & Frederick, 1997; Ryff & Singer, 1996), ratings of life satisfaction and judgments of the frequency of positive and negative affect have received the majority of the research attention, illustrating the dominance of the second and third (person-centered) definitions of happiness in the research literature. Notably, positive affect, negative affect, and life satisfaction are presumed to be somewhat distinct. Thus, although life satisfaction typically correlates positively with positive affect and negatively with negative affect, and positive affect typically correlates negatively with negative affect, these correlations are not necessarily strong (and they also vary depending on whether one assesses a particular time or context, or the person’s experience as a whole). The generally modest correlations among the three variables means that an individual high in one indicator is not necessarily high (or low) in any other indicator. For example, a person with many positive moods might also experience many negative moods, and a person with predominantly good moods may or may not be satisfied with his or her life. As a case in point, a college student who has many friends and rewarding social interactions may be experiencing frequent pleasant affect, but, if he doubts that college is the right choice for him, he will be discontent with life. In contrast, a person experiencing many negative moods might nevertheless be satisfied with her life, if she finds her life meaningful or is suffering for a good cause. For example, a frazzled new mother may feel that all her most cherished life goals are being realized, yet she is experiencing a great deal of negative emotions on a daily basis. Still, the three quantities typically go together to an extent such that a comprehensive and reliable subjective well-being (SWB) indicator can be computed by summing positive affect and life satisfaction and subtracting negative affect. Can we trust people’s self-reports of happiness (or unhappiness)? Actually, we must: It would make little sense to claim that a person is happy if he or she does not acknowledge being happy. Still, it is possible to corroborate self-reports of well-being with reports from the respondents’ friends and",
"title": ""
},
{
"docid": "a960ced0cd3859c037c43790a6b8436b",
"text": "Ferroresonance is a widely studied phenomenon but it is still not well understood because of its complex behavior. It is “fuzzy-resonance.” A simple graphical approach using fundamental frequency phasors has been presented to elevate the readers understanding. Its occurrence and how it appears is extremely sensitive to the transformer characteristics, system parameters, transient voltages and initial conditions. More efficient transformer core material has lead to its increased occurrence and it has considerable effects on system apparatus and protection. Power system engineers should strive to recognize potential ferroresonant configurations and design solutions to prevent its occurrence.",
"title": ""
},
{
"docid": "9db388f2564a24f58d8ea185e5b514be",
"text": "Analyzing large volumes of log events without some kind of classification is undoable nowadays due to the large amount of events. Using AI to classify events make these log events usable again. With the use of the Keras Deep Learning API, which supports many Optimizing Stochastic Gradient Decent algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found as the best choice with an accuracy of 29,8%.",
"title": ""
},
{
"docid": "b0bb9c4bcf666dca927d4f747bfb1ca1",
"text": "Remote monitoring of animal behaviour in the environment can assist in managing both the animal and its environmental impact. GPS collars which record animal locations with high temporal frequency allow researchers to monitor both animal behaviour and interactions with the environment. These ground-based sensors can be combined with remotely-sensed satellite images to understand animal-landscape interactions. The key to combining these technologies is communication methods such as wireless sensor networks (WSNs). We explore this concept using a case-study from an extensive cattle enterprise in northern Australia and demonstrate the potential for combining GPS collars and satellite images in a WSN to monitor behavioural preferences and social behaviour of cattle.",
"title": ""
},
{
"docid": "3601a56b6c68864da31ac5aaa67bff1a",
"text": "Information asymmetry exists amongst stakeholders in the current food supply chain. Lack of standardization in data format, lack of regulations, and siloed, legacy information systems exasperate the problem. Global agriculture trade is increasing creating a greater need for traceability in the global supply chain. This paper introduces Harvest Network, a theoretical end-to-end, vis a vie “farm-to-fork”, food traceability application integrating the Ethereum blockchain and IoT devices exchanging GS1 message standards. The goal is to create a distributed ledger accessible for all stakeholders in the supply chain. Our design effort creates a basic framework (artefact) for building a prototype or simulation using existing technologies and protocols [1]. The next step is for industry practitioners and researchers to apply AGILE methods for creating working prototypes and advanced projects that bring about greater transparency.",
"title": ""
},
{
"docid": "f92087a8e81c45cd8bedc12fddd682fc",
"text": "This paper presented a novel power conversion method of realizing the galvanic isolation by dual safety capacitors (Y-cap) instead of conventional transformer. With limited capacitance of the Y capacitor, series resonant is proposed to achieve the power transfer. The basic concept is to control the power path impedance, which blocks the dominant low-frequency part of touch current and let the high-frequency power flow freely. Conceptual analysis, simulation and design considerations are mentioned in this paper. An 85W AC/AC prototype is designed and verified to substitute the isolation transformer of a CCFL LCD TV backlight system. Compared with the conventional transformer isolation, the new method is proved to meet the function and safety requirements of its specification while has higher efficiency and smaller size.",
"title": ""
},
{
"docid": "5fde7006ec6f7cf4f945b234157e5791",
"text": "In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"title": ""
},
{
"docid": "2070b05100a92e883252c80666c3dde8",
"text": "Visiting museums and exhibitions represented in multi-user 3D environments can be an efficient way of learning about the exhibits in an interactive manner and socialising with other visitors. The rich educational information presented in the virtual environment and the presence of remote users could also be beneficial for the visitors of the physical exhibition space. In this paper we present the design and implementation of a virtual exhibition that allowed local and remote visitors coexist in the environment, access the interactive content and communicate with each other. The virtual exhibition was accessible to the remote users from the Web and to local visitors through an installation in the physical space. The installation projected the virtual world in the exhibition environment and let users interact with it using a handheld gesture-based device. We performed an evaluation of the 3D environment with the participation of both local and remote visitors. The evaluation results indicate that the virtual world was considered exciting and easy to use by the majority of the participants. Furthermore, according to the evaluation results, virtual museums and exhibitions seem to have significant advantages for remote visitors compared to typical museum web sites, and they can also be an important aid to local visitors and enhance their experience.",
"title": ""
},
{
"docid": "5b6d68984b4f9a6e0f94e0a68768dc8c",
"text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF",
"title": ""
}
] |
scidocsrr
|
467f46c9a94b37b93c02e24ad5f45ec9
|
TurboQuad: A Novel Leg–Wheel Transformable Robot With Smooth and Fast Behavioral Transitions
|
[
{
"docid": "e5da4f6a9abd5f1c751a366768d8456c",
"text": "We report on the design, optimization, and performance evaluation of a new wheel-leg hybrid robot. This robot utilizes a novel transformable wheel that combines the advantages of both circular and legged wheels. To minimize the complexity of the design, the transformation process of the wheel is passive, which eliminates the need for additional actuators. A new triggering mechanism is also employed to increase the transformation success rate. To maximize the climbing ability in legged-wheel mode, the design parameters for the transformable wheel and robot are tuned based on behavioral analyses. The performance of our new development is evaluated in terms of stability, energy efficiency, and the maximum height of an obstacle that the robot can climb over. With the new transformable wheel, the robot can climb over an obstacle 3.25 times as tall as its wheel radius, without compromising its driving ability at a speed of 2.4 body lengths/s with a specific resistance of 0.7 on a flat surface.",
"title": ""
}
] |
[
{
"docid": "b5009853d22801517431f46683b235c2",
"text": "Artificial intelligence (AI) is the study of how to make computers do things which, at the moment, people do better. Thus Strong AI claims that in near future we will be surrounded by such kinds of machine which can completely works like human being and machine could have human level intelligence. One intention of this article is to excite a broader AI audience about abstract algorithmic information theory concepts, and conversely to inform theorists about exciting applications to AI.The science of Artificial Intelligence (AI) might be defined as the construction of intelligent systems and their analysis.",
"title": ""
},
{
"docid": "9611686ff4eedf047460becec43ce59d",
"text": "We propose a novel location-based second-factor authentication solution for modern smartphones. We demonstrate our solution in the context of point of sale transactions and show how it can be effectively used for the detection of fraudulent transactions caused by card theft or counterfeiting. Our scheme makes use of Trusted Execution Environments (TEEs), such as ARM TrustZone, commonly available on modern smartphones, and resists strong attackers, even those capable of compromising the victim phone applications and OS. It does not require any changes in the user behavior at the point of sale or to the deployed terminals. In particular, we show that practical deployment of smartphone-based second-factor authentication requires a secure enrollment phase that binds the user to his smartphone TEE and allows convenient device migration. We then propose two novel enrollment schemes that resist targeted attacks and provide easy migration. We implement our solution within available platforms and show that it is indeed realizable, can be deployed with small software changes, and does not hinder user experience.",
"title": ""
},
{
"docid": "36f31dea196f2d7a74bc442f1c184024",
"text": "The causes of Parkinson's disease (PD), the second most common neurodegenerative disorder, are still largely unknown. Current thinking is that major gene mutations cause only a small proportion of all cases and that in most cases, non-genetic factors play a part, probably in interaction with susceptibility genes. Numerous epidemiological studies have been done to identify such non-genetic risk factors, but most were small and methodologically limited. Larger, well-designed prospective cohort studies have only recently reached a stage at which they have enough incident patients and person-years of follow-up to investigate possible risk factors and their interactions. In this article, we review what is known about the prevalence, incidence, risk factors, and prognosis of PD from epidemiological studies.",
"title": ""
},
{
"docid": "b188f936fb618e84a9d93343778a2adc",
"text": "Face multi-attribute prediction benefits substantially from multi-task learning (MTL), which learns multiple face attributes simultaneously to achieve shared or mutually related representations of different attributes. The most widely used MTL convolutional neural network is heuristically or empirically designed by sharing all of the convolutional layers and splitting at the fully connected layers for task-specific losses. However, it is improper to view all low and midlevel features for different attributes as being the same, especially when these attributes are only loosely related. In this paper, we propose a novel multi-attribute tensor correlation neural network (MTCN) for face attribute prediction. The structure shares the information in low-level features (e.g., the first two convolutional layers) but splits that in high-level features (e.g., from the third convolutional layer to the fully connected layer). At the same time, during high-level feature extraction, each subnetwork (e.g., AgeNet, Gender-Net, ..., and Smile-Net) excavates closely related features from other networks to enhance its features. Then, we project the features of the C9 layers of the finetuned subnetworks into a highly correlated space by using a novel tensor correlation analysis algorithm (NTCCA). The final face attribute prediction is made based on the correlation matrix. Experimental results on benchmarks with multiple face attributes (CelebA and LFWA) show that the proposed approach has superior performance compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "7a18b4e266cb353e523addfacbdf5bdf",
"text": "The field of image composition is constantly trying to improve the ways in which an image can be altered and enhanced. While this is usually done in the name of aesthetics and practicality, it also provides tools that can be used to maliciously alter images. In this sense, the field of digital image forensics has to be prepared to deal with the influx of new technology, in a constant arms-race. In this paper, the current state of this armsrace is analyzed, surveying the state-of-the-art and providing means to compare both sides. A novel scale to classify image forensics assessments is proposed, and experiments are performed to test composition techniques in regards to different forensics traces. We show that even though research in forensics seems unaware of the advanced forms of image composition, it possesses the basic tools to detect it.",
"title": ""
},
{
"docid": "88f60c6835fed23e12c56fba618ff931",
"text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.",
"title": ""
},
{
"docid": "38d791ebe063bd58a04afd21e6d8f25a",
"text": "The design of a Web search evaluation metric is closely related with how the user's interaction process is modeled. Each behavioral model results in a different metric used to evaluate search performance. In these models and the user behavior assumptions behind them, when a user ends a search session is one of the prime concerns because it is highly related to both benefit and cost estimation. Existing metric design usually adopts some simplified criteria to decide the stopping time point: (1) upper limit for benefit (e.g. RR, AP); (2) upper limit for cost (e.g. Precision@N, DCG@N). However, in many practical search sessions (e.g. exploratory search), the stopping criterion is more complex than the simplified case. Analyzing benefit and cost of actual users' search sessions, we find that the stopping criteria vary with search tasks and are usually combination effects of both benefit and cost factors. Inspired by a popular computer game named Bejeweled, we propose a Bejeweled Player Model (BPM) to simulate users' search interaction processes and evaluate their search performances. In the BPM, a user stops when he/she either has found sufficient useful information or has no more patience to continue. Given this assumption, a new evaluation framework based on upper limits (either fixed or changeable as search proceeds) for both benefit and cost is proposed. We show how to derive a new metric from the framework and demonstrate that it can be adopted to revise traditional metrics like Discounted Cumulative Gain (DCG), Expected Reciprocal Rank (ERR) and Average Precision (AP). To show effectiveness of the proposed framework, we compare it with a number of existing metrics in terms of correlation between user satisfaction and the metrics based on a dataset that collects users' explicit satisfaction feedbacks and assessors' relevance judgements. Experiment results show that the framework is better correlated with user satisfaction feedbacks.",
"title": ""
},
{
"docid": "45d6863e54b343d7a081e79c84b81e65",
"text": "In order to obtain optimal 3D structure and viewing parameter estimates, bundle adjustment is often used as the last step of feature-based structure and motion estimation algorithms. Bundle adjustment involves the formulation of a large scale, yet sparse minimization problem, which is traditionally solved using a sparse variant of the Levenberg-Marquardt optimization algorithm that avoids storing and operating on zero entries. This paper argues that considerable computational benefits can be gained by substituting the sparse Levenberg-Marquardt algorithm in the implementation of bundle adjustment with a sparse variant of Powell's dog leg non-linear least squares technique. Detailed comparative experimental results provide strong evidence supporting this claim",
"title": ""
},
{
"docid": "de2bbd675430ffcb490f090f8baec98d",
"text": "In this letter, we analyze the electromagnetic characteristic of a frequency selective surface (FSS) radome using the physical optics (PO) method and ray tracing technique. We consider the cross-loop slot FSS and the tangent-ogive radome. Radiation pattern of the FSS radome is computed to illustrate the electromagnetic transmission characteristic.",
"title": ""
},
{
"docid": "961372a5e1b21053894040a11e946c8d",
"text": "The main purpose of this paper is to introduce an approach to design a DC-DC boost converter with constant output voltage for grid connected photovoltaic application system. The boost converter is designed to step up a fluctuating solar panel voltage to a higher constant DC voltage. It uses voltage feedback to keep the output voltage constant. To do so, a microcontroller is used as the heart of the control system which it tracks and provides pulse-width-modulation signal to control power electronic device in boost converter. The boost converter will be able to direct couple with grid-tied inverter for grid connected photovoltaic system. Simulations were performed to describe the proposed design. Experimental works were carried out with the designed boost converter which has a power rating of 100 W and 24 V output voltage operated in continuous conduction mode at 20 kHz switching frequency. The test results show that the proposed design exhibits a good performance.",
"title": ""
},
{
"docid": "1c576cf604526b448f0264f2c39f705a",
"text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.",
"title": ""
},
{
"docid": "4e93ce8e5a6175dd558954e560d7ddc2",
"text": "This paper presents a new type of narrow band filter with good electrical performance and manufacturing flexibility, based on the newly introduced groove gap waveguide technology. The designed third and fifth-order filters work at Ku band with 1% fractional bandwidth. These filter structures are manufactured with an allowable gap between two metal blocks, in such a way that there is no requirement for electrical contact and alignment between the blocks. This is a major manufacturing advantage compared to normal rectangular waveguide filters. The measured results of the manufactured filters show reasonably good agreement with the full-wave simulated results, without any tuning or adjustments.",
"title": ""
},
{
"docid": "59ac2e47ed0824eeba1621673f2dccf5",
"text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot",
"title": ""
},
{
"docid": "e81cffe3f2f716520ede92d482ddab34",
"text": "An active research trend is to exploit the consensus mechanism of cryptocurrencies to secure the execution of distributed applications. In particular, some recent works have proposed fair lotteries which work on Bitcoin. These protocols, however, require a deposit from each player which grows quadratically with the number of players. We propose a fair lottery on Bitcoin which only requires a constant deposit.",
"title": ""
},
{
"docid": "c757e54a14beec3b4930ad050a16d311",
"text": "The University Class Scheduling Problem (UCSP) is concerned with assigning a number of courses to classrooms taking into consideration constraints like classroom capacities and university regulations. The problem also attempts to optimize the performance criteria and distribute the courses fairly to classrooms depending on the ratio of classroom capacities to course enrollments. The problem is a classical scheduling problem and considered to be NP-complete. It has received some research during the past few years given its wide use in colleges and universities. Several formulations and algorithms have been proposed to solve scheduling problems, most of which are based on local search techniques. In this paper, we propose a complete approach using integer linear programming (ILP) to solve the problem. The ILP model of interest is developed and solved using the three advanced ILP solvers based on generic algorithms and Boolean Satisfiability (SAT) techniques. SAT has been heavily researched in the past few years and has lead to the development of powerful 0-1 ILP solvers that can compete with the best available generic ILP solvers. Experimental results indicate that the proposed model is tractable for reasonable-sized UCSP problems. Index Terms — University Class Scheduling, Optimization, Integer Linear Programming (ILP), Boolean Satisfiability.",
"title": ""
},
{
"docid": "02b6bcef39a21b14ce327f3dc9671fef",
"text": "We've all heard tales of multimillion dollar mistakes that somehow ran off course. Are software projects that risky or do managers need to take a fresh approach when preparing for such critical expeditions? Software projects are notoriously difficult to manage and too many of them end in failure. In 1995, annual U.S. spending on software projects reached approximately $250 billion and encompassed an estimated 175,000 projects [6]. Despite the costs involved, press reports suggest that project failures are occurring with alarming frequency. In 1995, U.S companies alone spent an estimated $59 billion in cost overruns on IS projects and another $81 billion on canceled software projects [6]. One explanation for the high failure rate is that managers are not taking prudent measures to assess and manage the risks involved in these projects. is Advocates of software project risk management claim that by countering these threats to success, the incidence of failure can be reduced [4, 5]. Before we can develop meaningful risk management strategies, however, we must identify these risks. Furthermore, the relative importance of these risks needs to be established, along with some understanding as to why certain risks are perceived to be more important than others. This is necessary so that managerial attention can be focused on the areas that constitute the greatest threats. Finally, identified risks must be classified in a way that suggests meaningful risk mitigation strategies. Here, we report the results of a Delphi study in which experienced software project managers identified and ranked the most important risks. The study led not only to the identification of risk factors and their relative importance, but also to novel insights into why project managers might view certain risks as being more important than others. Based on these insights, we introduce a framework for classifying software project risks and discuss appropriate strategies for managing each type of risk. Since the 1970s, both academics and practitioners have written about risks associated with managing software projects [1, 2, 4, 5, 7, 8]. Unfortunately , much of what has been written on risk is based either on anecdotal evidence or on studies limited to a narrow portion of the development process. Moreover, no systematic attempts have been made to identify software project risks by tapping the opinions of those who actually have experience in managing such projects. With a few exceptions [3, 8], there has been little attempt to understand the …",
"title": ""
},
{
"docid": "79811b3cfec543470941e9529dc0ab24",
"text": "We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support informationsharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than they are with its visual appearance or a linguistically-derived object category. We provide preliminary validation of our method experimentally, and present empirical comparisons to both the direct and category-based approaches of affordance prediction. Our encouraging results suggest the promise of the attributebased approach to affordance prediction.",
"title": ""
},
{
"docid": "257d1de3b45533ca49e0a78ba55c841e",
"text": "Machine learning (ML) is the fastest growing field in computer science, and health informatics is among the greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML-approaches suffer of insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well used, so we define it as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.” This “human-in-the-loop” can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem, reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.",
"title": ""
}
] |
scidocsrr
|
5369e0ec52989c5b78e198d934e603b1
|
DAAL: Deep activation-based attribute learning for action recognition in depth videos
|
[
{
"docid": "695af0109c538ca04acff8600d6604d4",
"text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"title": ""
},
{
"docid": "d529b723bbba3182d02a0104d4418c6d",
"text": "Learning the spatial-temporal representation of motion information is crucial to human action recognition. Nevertheless, most of the existing features or descriptors cannot capture motion information effectively, especially for long-term motion. To address this problem, this paper proposes a long-term motion descriptor called sequential deep trajectory descriptor (sDTD). Specifically, we project dense trajectories into two-dimensional planes, and subsequently a CNN-RNN network is employed to learn an effective representation for long-term motion. Unlike the popular two-stream ConvNets, the sDTD stream is introduced into a three-stream framework so as to identify actions from a video sequence. Consequently, this three-stream framework can simultaneously capture static spatial features, short-term motion, and long-term motion in the video. Extensive experiments were conducted on three challenging datasets: KTH, HMDB51, and UCF101. Experimental results show that our method achieves state-of-the-art performance on the KTH and UCF101 datasets, and is comparable to the state-of-the-art methods on the HMDB51 dataset.",
"title": ""
},
{
"docid": "8a19befe72e06f2adaf58a575ac16cdb",
"text": "Single modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of the RGB+D videos can help us to better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder based shared-specific feature factorization network to separate input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. Our experimental results show the effectiveness of our cross-modality feature analysis framework by achieving state-of-the-art accuracy for action classification on five challenging benchmark datasets.",
"title": ""
},
{
"docid": "d1c84b1131f8cb2abbbb0383c83bc0d2",
"text": "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms.",
"title": ""
}
] |
[
{
"docid": "f7a69acbc2766e990cbd4f3c9b4124d1",
"text": "This paper aims at assisting empirical researchers benefit from recent advances in causal inference. The paper stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underly all causal inferences, the languages used in formulating those assumptions, and the conditional nature of causal claims inferred from nonexperimental studies. These emphases are illustrated through a brief survey of recent results, including the control of confounding, the assessment of causal effects, the interpretation of counterfactuals, and a symbiosis between counterfactual and graphical methods of analysis.",
"title": ""
},
{
"docid": "a14656cc178eeffb5327c74649fdb456",
"text": "White light emitting diode (LED) with high brightness has attracted a lot of attention from both industry and academia for its high efficiency, ease to drive, environmental friendliness, and long lifespan. They become possible applications to replace the incandescent bulbs and fluorescent lamps in residential, industrial and commercial lighting. The realization of this new lighting source requires both tight LED voltage regulation and high power factor as well. This paper proposed a single-stage flyback converter for the LED lighting applications and input power factor correction. A type-II compensator has been inserted in the voltage loop providing sufficient bandwidth and stable phase margin. The flyback converter is controlled with voltage mode pulse width modulation (PWM) and run in discontinuous conduction mode (DCM) so that the inductor current follows the rectified input voltage, resulting in high power factor. A prototype topology of closed-loop, single-stage flyback converter for LED driver circuit designed for an 18W LED lighting source is constructed and tested to verify the theoretical predictions. The measured performance of the LED lighting fixture can achieve a high power factor greater than 0.998 and a low total harmonic distortion less than 5.0%. Experimental results show the functionality of the overall system and prove it to be an effective solution for the new lighting applications.",
"title": ""
},
{
"docid": "bc92aa05e989ead172274b4558aa4443",
"text": "A recent video coding standard, called High Efficiency Video Coding (HEVC), adopts two in-loop filters for coding efficiency improvement where the in-loop filtering is done by a de-blocking filter (DF) followed by sample adaptive offset (SAO) filtering. The DF helps improve both coding efficiency and subjective quality without signaling any bit to decoder sides while SAO filtering corrects the quantization errors by sending offset values to decoders. In this paper, we first present a new in-loop filtering technique using convolutional neural networks (CNN), called IFCNN, for coding efficiency and subjective visual quality improvement. The IFCNN does not require signaling bits by using the same trained weights in both encoders and decoder. The proposed IFCNN is trained in two different QP ranges: QR1 from QP = 20 to QP = 29; and QR2 from QP = 30 to QP = 39. In testing, the IFCNN trained in QR1 is applied for the encoding/decoding with QP values less than 30 while the IFCNN trained in QR2 is applied for the case of QP values greater than 29. The experiment results show that the proposed IFCNN outperforms the HEVC reference mode (HM) with average 1.9%-2.8% gain in BD-rate for Low Delay configuration, and average 1.6%-2.6% gain in BD-rate for Random Access configuration with IDR period 16.",
"title": ""
},
{
"docid": "9b430645f7b0da19b2c55d43985259d8",
"text": "Research on human spatial memory and navigational ability has recently shown the strong influence of reference systems in spatial memory on the ways spatial information is accessed in navigation and other spatially oriented tasks. One of the main findings can be characterized as a large cognitive cost, both in terms of speed and accuracy that occurs whenever the reference system used to encode spatial information in memory is not aligned with the reference system required by a particular task. In this paper, the role of aligned and misaligned reference systems is discussed in the context of the built environment and modern architecture. The role of architectural design on the perception and mental representation of space by humans is investigated. The navigability and usability of built space is systematically analysed in the light of cognitive theories of spatial and navigational abilities of humans. It is concluded that a building’s navigability and related wayfinding issues can benefit from architectural design that takes into account basic results of spatial cognition research. 1 Wayfinding and Architecture Life takes place in space and humans, like other organisms, have developed adaptive strategies to find their way around their environment. Tasks such as identifying a place or direction, retracing one’s path, or navigating a large-scale space, are essential elements to mobile organisms. Most of these spatial abilities have evolved in natural environments over a very long time, using properties present in nature as cues for spatial orientation and wayfinding. With the rise of complex social structure and culture, humans began to modify their natural environment to better fit their needs. The emergence of primitive dwellings mainly provided shelter, but at the same time allowed builders to create environments whose spatial structure “regulated” the chaotic natural environment. They did this by using basic measurements and geometric relations, such as straight lines, right angles, etc., as the basic elements of design (Le Corbusier, 1931, p. 69ff.) In modern society, most of our lives take place in similar regulated, human-made spatial environments, with paths, tracks, streets, and hallways as the main arteries of human locomotion. Architecture and landscape architecture embody the human effort to structure space in meaningful and useful ways. Architectural design of space has multiple functions. Architecture is designed to satisfy the different representational, functional, aesthetic, and emotional needs of organizations and the people who live or work in these structures. In this chapter, emphasis lies on a specific functional aspect of architectural design: human wayfinding. Many approaches to improving architecture focus on functional issues, like improved ecological design, the creation of improved workplaces, better climate control, lighting conditions, or social meeting areas. Similarly, when focusing on the mobility of humans, the ease of wayfinding within a building can be seen as an essential function of a building’s design (Arthur & Passini, 1992; Passini, 1984). When focusing on wayfinding issues in buildings, cities, and landscapes, the designed spatial environment can be seen as an important tool in achieving a particular goal, e.g., reaching a destination or finding an exit in case of emergency. 
This view, if taken to a literal extreme, is summarized by Le Corbusier’s (1931) notion of the building as a “machine,” mirroring in architecture the engineering ideals of efficiency and functionality found in airplanes and cars. In the narrow sense of wayfinding, a building thus can be considered of good design if it allows easy and error-free navigation. This view is also adopted by Passini (1984), who states that “although the architecture and the spatial configuration of a building generate the wayfinding problems people have to solve, they are also a wayfinding support system in that they contain the information necessary to solve the problem” (p. 110). Like other problems of engineering, the wayfinding problem in architecture should have one or more solutions that can be evaluated. This view of architecture can be contrasted with the alternative view of architecture as “built philosophy”. According to this latter view, architecture, like art, expresses ideas and cultural progress by shaping the spatial structure of the world – a view which gives consideration to the users as part of the philosophical approach but not necessarily from a usability perspective. Viewing wayfinding within the built environment as a “man-machine-interaction” problem makes clear that good architectural design with respect to navigability needs to take two factors into account. First, the human user comes equipped with particular sensory, perceptual, motoric, and cognitive abilities. Knowledge of these abilities and the limitations of an average user or special user populations thus is a prerequisite for good design. Second, structural, functional, financial, and other design considerations restrict the degrees of freedom architects have in designing usable spaces. In the following sections, we first focus on basic research on human spatial cognition. Even though not all of it is directly applicable to architectural design and wayfinding, it lays the foundation for more specific analyses in part 3 and 4. In part 3, the emphasis is on a specific research question that recently has attracted some attention: the role of environmental structure (e.g., building and street layout) for the selection of a spatial reference frame. In part 4, implications for architectural design are discussed by means of two real-world examples. 2 The human user in wayfinding 2.1 Navigational strategies Finding one’s way in the environment, reaching a destination, or remembering the location of relevant objects are some of the elementary tasks of human activity. Fortunately, human navigators are well equipped with an array of flexible navigational strategies, which usually enable them to master their spatial environment (Allen, 1999). In addition, human navigation can rely on tools that extend human sensory and mnemonic abilities. Most spatial or navigational strategies are so common that they do not occur to us when we perform them. Walking down a hallway we hardly realize that the optical and acoustical flows give us rich information about where we are headed and whether we will collide with other objects (Gibson, 1979). Our perception of other objects already includes physical and social models on how they will move and where they will be once we reach the point where paths might cross. Following a path can consist of following a particular visual texture (e.g., asphalt) or feeling a handrail in the dark by touch. 
At places where multiple continuing paths are possible, we might have learned to associate the scene with a particular action (e.g., turn left; Schölkopf & Mallot, 1995), or we might try to approximate a heading direction by choosing the path that most closely resembles this direction. When in doubt about our path we might ask another person or consult a map. As is evident from this brief (and not exhaustive) description, navigational strategies and activities are rich in diversity and adaptability (for an overview see Golledge, 1999; Werner, Krieg-Brückner, & Herrmann, 2000), some of which are aided by architectural design and signage (see Arthur & Passini, 1992; Passini, 1984). Despite the large number of different navigational strategies, people still experience problems finding their way or even feel lost momentarily. This feeling of being lost might reflect the lack of a key component of human wayfinding: knowledge about where one is located in an environment – with respect to one’s goal, one’s starting location, or with respect to the global environment one is in. As Lynch put it, “the terror of being lost comes from the necessity that a mobile organism be oriented in its surroundings” (1960, p. 125.) Some wayfinding strategies, like vector navigation, rely heavily on this information. Other strategies, e.g. piloting or path-following, which are based on purely local information can benefit from even vague locational knowledge as a redundant source of information to validate or question navigational decisions (see Werner et al., 2000, for examples.) Proficient signage in buildings, on the other hand, relies on a different strategy. It relieves a user from keeping track of his or her position in space by indicating the correct navigational choice whenever the choice becomes relevant. Keeping track of one’s position during navigation can be done quite easily if access to global landmarks, reference directions, or coordinates is possible. Unfortunately, the built environment often does not allow for simple navigational strategies based on these types of information. Instead, spatial information has to be integrated across multiple places, paths, turns, and extended periods of time (see Poucet, 1993, for an interesting model of how this can be achieved). In the next section we will describe an essential ingredient of this integration – the mental representation of spatial information in memory. 2.2 Alignment effects in spatial memory When observing tourists in an unfamiliar environment, one often notices people frantically turning maps to align the noticeable landmarks depicted in the map with the visible landmarks as seen from the viewpoint of the tourist. This type of behavior indicates a well-established cognitive principle (Levine, Jankovic, & Palij, 1982). Observers more easily comprehend and use information depicted in “You-are-here” (YAH) maps if the up-down direction of the map coincides with the front-back direction of the observer. In this situation, the natural preference of directional mapping of top to front and bottom to back is used, and left and right in the map stay left and right in the depicted world. While th",
"title": ""
},
{
"docid": "6c2a033b374b4318cd94f0a617ec705a",
"text": "In this paper, we propose to use Deep Neural Net (DNN), which has been recently shown to reduce speech recognition errors significantly, in Computer-Aided Language Learning (CALL) to evaluate English learners’ pronunciations. Multi-layer, stacked Restricted Boltzman Machines (RBMs), are first trained as nonlinear basis functions to represent speech signals succinctly, and the output layer is discriminatively trained to optimize the posterior probabilities of correct, sub-phonemic “senone” states. Three Goodness of Pronunciation (GOP) scores, including: the likelihood-based posterior probability, averaged framelevel posteriors of the DNN output layer “senone” nodes, and log likelihood ratio of correct and competing models, are tested with recordings of both native and non-native speakers, along with manual grading of pronunciation quality. The experimental results show that the GOP estimated by averaged frame-level posteriors of “senones” correlate with human scores the best. Comparing with GOPs estimated with non-DNN, i.e. GMMHMM, based models, the new approach can improve the correlations relatively by 22.0% or 15.6%, at word or sentence levels, respectively. In addition, the frame-level posteriors, which doesn’t need a decoding lattice and its corresponding forwardbackward computations, is suitable for supporting fast, on-line, multi-channel applications.",
"title": ""
},
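The averaged frame-level posterior GOP score described in the passage above can be illustrated with a short sketch. This is a minimal illustration under simplifying assumptions, not the authors' system: the posterior matrix, the senone alignment, and the toy numbers are all hypothetical placeholders.

```python
import numpy as np

def gop_frame_posterior(posteriors, aligned_senones):
    """Goodness-of-Pronunciation as the mean log posterior of the senones
    that a forced alignment assigns to each frame (illustrative sketch only).

    posteriors:      (T, S) array of per-frame senone posteriors from a DNN.
    aligned_senones: length-T integer array of senone indices from alignment.
    """
    frame_probs = posteriors[np.arange(len(aligned_senones)), aligned_senones]
    return float(np.mean(np.log(frame_probs + 1e-12)))

# Toy example: 4 frames, 3 senones, alignment says [0, 0, 1, 2].
post = np.array([[0.7, 0.2, 0.1],
                 [0.6, 0.3, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.2, 0.7]])
print(gop_frame_posterior(post, np.array([0, 0, 1, 2])))
```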
{
"docid": "90ba220babb8030d1c400352dfde6473",
"text": "Localization and navigation are fundamental issues to autonomous mobile robotics. In the case of the environmental map has been built, the traditional two-dimensional (2D) lidar localization and navigation system can't be matched to the initial position of the robot in dynamic environment and will be unreliable when kidnapping occurs. Moreover, it relies on high-cost lidar for high accuracy and long range. In view of this, the paper presents a low cost navigation system based on a low cost lidar and a cheap webcam. In this approach, 2D-codes are attached to the ceiling, to provide reference points to aid the indoor robot localization. The mobile robot is equipped with webcam pointing to the ceiling to identify 2D-codes. On the other hand, a low-cost 2D laser scanner is applied to build a map in unknown environment and detect obstacles. Adaptive Monte Carlo Localization (AMCL) is implements for lidar positioning, A* and Dynamic Window Approach (DWA) are applied in path planning based on a 2D grid map. The error analysis and experiments has validated the proposed method.",
"title": ""
},
{
"docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c",
"text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "d9f0f36e75c08d2c3097e85d8c2dec36",
"text": "Social software solutions in enterprises such as IBM Connections are said to have the potential to support communication and collaboration among employees. However, companies are faced to manage the adoption of such collaborative tools and therefore need to raise the employees’ acceptance and motivation. To solve these problems, developers started to implement Gamification elements in social software tools, which aim to increase users’ motivation. In this research-in-progress paper, we give first insights and critically examine the current market of leading social software solutions to find out which Gamification approaches are implementated in such collaborative tools. Our findings show, that most of the major social collaboration solutions do not offer Gamification features by default, but leave the integration to a various number of third party plug-in vendors. Furthermore we identify a trend in which Gamification solutions majorly focus on rewarding quantitative improvement of work activities, neglecting qualitative performance. Subsequently, current solutions do not match recent findings in research and ignore risks that can lower the employees’ motivation and work performance in the long run.",
"title": ""
},
{
"docid": "d1475e197b300489acedf8c0cbe8f182",
"text": "—The publication of IEC 61850-90-1 \" Use of IEC 61850 for the communication between substations \" and the draft of IEC 61850-90-5 \" Use of IEC 61850 to transmit synchrophasor information \" opened the possibility to study IEC 61850 GOOSE Message over WAN not only in the layer 2 (link layer) but also in the layer 3 (network layer) in the OSI model. In this paper we examine different possibilities to make feasible teleprotection in the network layer over WAN sharing the communication channel with automation, management and maintenance convergence services among electrical energy substations.",
"title": ""
},
{
"docid": "d27ed8fd2acd0dad6436b7e98853239d",
"text": "a r t i c l e i n f o What are the psychological mechanisms that trigger habits in daily life? Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts—but not goals—automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences, Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas others indicate that habits are activated directly by context cues, with minimal influence of goals In the present experiments, we first test the cognitive associations …",
"title": ""
},
{
"docid": "902655db97a2f00a346ffda3694d01f3",
"text": "In this paper, we propose a new pipeline of word embedding for unsegmented languages, called segmentation-free word embedding, which does not require word segmentation as a preprocessing step. Unlike space-delimited languages, unsegmented languages, such as Chinese and Japanese, require word segmentation as a preprocessing step. However, word segmentation, that often requires manually annotated resources, is difficult and expensive, and unavoidable errors in word segmentation affect downstream tasks. To avoid these problems in learning word vectors of unsegmented languages, we consider word co-occurrence statistics over all possible candidates of segmentations based on frequent character n-grams instead of segmented sentences provided by conventional word segmenters. Our experiments of noun category prediction tasks on raw Twitter, Weibo, and Wikipedia corpora show that the proposed method outperforms the conventional approaches that require word segmenters.",
"title": ""
},
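The segmentation-free idea in the passage above (counting co-occurrences over frequent character n-grams rather than over segmented words) can be sketched as follows. This is a deliberately simplified sketch, not the paper's pipeline: it treats n-grams whose spans start close together as co-occurring instead of enumerating full candidate segmentations, and the tiny corpus is made up.

```python
from collections import Counter
from itertools import combinations

def char_ngrams(text, n_min=1, n_max=3):
    """All character n-grams of length n_min..n_max in a raw, unsegmented string."""
    return [text[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

def cooccurrence(corpus, top_k=1000, window=5):
    """Co-occurrence counts over frequent character n-grams.

    Simplification of the segmentation-free idea: n-grams whose spans start
    within `window` characters of each other are counted as co-occurring.
    """
    freq = Counter(g for line in corpus for g in char_ngrams(line))
    vocab = {g for g, _ in freq.most_common(top_k)}
    counts = Counter()
    for line in corpus:
        spans = [(i, line[i:i + n]) for n in (1, 2, 3)
                 for i in range(len(line) - n + 1) if line[i:i + n] in vocab]
        for (i, a), (j, b) in combinations(spans, 2):
            if abs(i - j) <= window:
                counts[(a, b)] += 1
    return counts

corpus = ["深層学習で単語ベクトルを学ぶ", "単語分割なしで学習する"]
print(cooccurrence(corpus, top_k=50, window=3).most_common(5))
```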
{
"docid": "39598533576bdd3fa94df5a6967b9b2d",
"text": "Genetic Algorithm (GA) and other Evolutionary Algorithms (EAs) have been successfully applied to solve constrained minimum spanning tree (MST) problems of the communication network design and also have been used extensively in a wide variety of communication network design problems. Choosing an appropriate representation of candidate solutions to the problem is the essential issue for applying GAs to solve real world network design problems, since the encoding and the interaction of the encoding with the crossover and mutation operators have strongly influence on the success of GAs. In this paper, we investigate a new encoding crossover and mutation operators on the performance of GAs to design of minimum spanning tree problem. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. The proposed crossover and mutation operators offer locality, heritability, and computational efficiency. We compare with the approach to others that encode candidate spanning trees via the Pr?fer number-based encoding, edge set-based encoding, and demonstrate better results on larger instances for the communication spanning tree design problems. key words: minimum spanning tree (MST), communication network design, genetic algorithm (GA), node-based encoding",
"title": ""
},
{
"docid": "bb253cee8f3b8de7c90e09ef878434f3",
"text": "Under most widely-used security mechanisms the programs users run possess more authority than is strictly necessary, with each process typically capable of utilising all of the user’s privileges. Consequently such security mechanisms often fail to protect against contemporary threats, such as previously unknown (‘zero-day’) malware and software vulnerabilities, as processes can misuse a user’s privileges to behave maliciously. Application restrictions and sandboxes can mitigate threats that traditional approaches to access control fail to prevent by limiting the authority granted to each process. This developing field has become an active area of research, and a variety of solutions have been proposed. However, despite the seriousness of the problem and the security advantages these schemes provide, practical obstacles have restricted their adoption. This paper describes the motivation for application restrictions and sandboxes, presenting an indepth review of the literature covering existing systems. This is the most comprehensive review of the field to date. The paper outlines the broad categories of existing application-oriented access control schemes, such as isolation and rule-based schemes, and discusses their limitations. Adoption of these schemes has arguably been impeded by workflow, policy complexity, and usability issues. The paper concludes with a discussion on areas for future work, and points a way forward within this developing field of research with recommendations for usability and abstraction to be considered to a further extent when designing application-oriented access",
"title": ""
},
{
"docid": "aa362363d6e4b48f7d0b50b02f35a8a2",
"text": "In this paper, we mainly adopt the voting combination method to implement the incremental learning for SVM. The incremental learning algorithm proposed by this paper has contained two parts in order to tackle different types of incremental learning cases, the first part is to deal with the on-line incremental learning and the second part is to deal with the batch incremental learning. In the final, we make the experiment to verify the validity and efficiency of such algorithm.",
"title": ""
},
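One plausible reading of the voting-combination idea in the passage above is a batch-incremental ensemble: train one SVM per arriving batch and combine their predictions by majority vote. The sketch below follows that reading; it uses scikit-learn (an assumption, the paper names no library) and is not the authors' exact algorithm.

```python
import numpy as np
from sklearn.svm import SVC

class VotingIncrementalSVM:
    """Batch-incremental learner: one SVM per batch, majority-vote prediction.
    Illustrative sketch of the voting-combination idea, not the paper's code."""

    def __init__(self):
        self.models = []

    def partial_fit(self, X, y):
        # Each incoming batch gets its own SVM; old models are kept unchanged.
        model = SVC(kernel="rbf", gamma="scale")
        model.fit(X, y)
        self.models.append(model)

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])  # (n_models, n_samples)
        # Majority vote per sample across all batch models.
        return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

rng = np.random.default_rng(0)
learner = VotingIncrementalSVM()
for _ in range(3):  # three incremental batches
    X = rng.normal(size=(50, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    learner.partial_fit(X, y)
print(learner.predict(np.array([[1.0, 1.0], [-1.0, -1.0]])))
```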
{
"docid": "883d79eac056314ae45feca23d79c3e3",
"text": "Our life is characterized by the presence of a multitude of interactive devices and smart objects exploited for disparate goals in different contexts of use. Thus, it is impossible for application developers to predict at design time the devices and objects users will exploit, how they will be arranged, and in which situations and for which objectives they will be used. For such reasons, it is important to make end users able to easily and autonomously personalize the behaviour of their Internet of Things applications, so that they can better comply with their specific expectations. In this paper, we present a method and a set of tools that allow end users without programming experience to customize the context-dependent behaviour of their Web applications through the specification of trigger-action rules. The environment is able to support end-user specification of more flexible behaviour than what can be done with existing commercial tools, and it also includes an underlying infrastructure able to detect the possible contextual changes in order to achieve the desired behaviour. The resulting set of tools is able to support the dynamic creation and execution of personalized application versions more suitable for users’ needs in specific contexts of use. Thus, it represents a contribution to obtaining low threshold/high ceiling environments. We also report on an example application in the home automation domain, and a user study that has provided useful positive feedback.",
"title": ""
},
{
"docid": "172835b4eaaf987e93d352177fd583b1",
"text": "A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as “or”, “sum” or “max”, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.",
"title": ""
},
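The "associative and commutative operator" view of causal independence in the passage above is commonly illustrated with the noisy-OR model, where each parent contributes to the child independently and the contributions are combined with OR. The sketch below shows that factorization for binary variables; it is a standard textbook construction rather than the VE algorithm itself, and the activation probabilities are made up.

```python
from functools import reduce

def noisy_or(parent_states, activations, leak=0.0):
    """P(child = 1 | parents) under noisy-OR causal independence.

    parent_states: list of 0/1 values, one per parent.
    activations:   P(parent alone turns the child on), one per parent.
    leak:          probability the child is on with all parents off.
    Each parent contributes a separate factor, and the factors are combined
    with the OR operator, so the conditional probability decomposes into
    per-parent terms.
    """
    off_terms = [(1.0 - q) if s else 1.0 for s, q in zip(parent_states, activations)]
    p_child_off = (1.0 - leak) * reduce(lambda a, b: a * b, off_terms, 1.0)
    return 1.0 - p_child_off

# Two causes with activation probabilities 0.8 and 0.6, small leak.
print(noisy_or([1, 0], [0.8, 0.6], leak=0.05))  # only the first cause active
print(noisy_or([1, 1], [0.8, 0.6], leak=0.05))  # both causes active
```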
{
"docid": "e1dd2a719d3389a11323c5245cd2b938",
"text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.",
"title": ""
},
{
"docid": "bbc802e8653c6ae6cb643acc649de471",
"text": "To overcome the power delivery limitations of batteries and energy storage limitations of ultracapacitors, hybrid energy storage systems, which combine the two energy sources, have been proposed. A comprehensive review of the state of the art is presented. In addition, a method of optimizing the operation of a battery/ultracapacitor hybrid energy storage system (HESS) is presented. The goal is to set the state of charge of the ultracapacitor and the battery in a way which ensures that the available power and energy is sufficient to supply the drivetrain. By utilizing an algorithm where the states of charge of both systems are tightly controlled, we allow for the overall system size to reduce since more power is available from a smaller energy storage system",
"title": ""
},
{
"docid": "15d3605a6c7ceadd0216a9f67915dfdf",
"text": "Rendu-Osler-Weber disease, or hereditary hemorrhagic telangiectasia (HHT), is an autosomal dominant disease characterized by systemic vascular dysplasia. The prevalence varies and ranges, according to region, from 1/3500 to 1/5000. Data concerning Italy are not available. The diagnosis is based on the following criteria: family history, epistaxis, telangiectases and visceral arteriovenous malformations. The diagnosis is to be considered definite if three criteria are present and suspected if two criteria are present. From September 2000 to March 2002, 100 patients (63 males, 37 females, mean age 45.5 +/- 17.3 years) potentially affected by HHT were evaluated in the HHT Center of the \"Augusto Murri\" Internal Medicine Section at the University of Bari (on a day-hospital or hospitalization basis). The diagnosis of HHT was confirmed in 56 patients and suspected in 10. Magnetic resonance imaging revealed cerebral arteriovenous malformations in 8.5% of patients. In 14.6% of patients contrast echocardiography revealed pulmonary arteriovenous malformations subsequently confirmed at multislice computed tomography in all cases but one. In 48.2% of subjects hepatic vascular malformations were revealed by echo color Doppler ultrasonography, whereas abdominal multislice computed tomography was positive in 63.8% of patients. In 64% of the 25 patients, who underwent endoscopy, gastric telangiectases were found. In 3 out of 6 patients presenting with pulmonary arteriovenous malformations, embolotherapy was performed with success. In our patients, the use of tranexamic acid caused a reduction in the frequency of epistaxis. The future objectives of the HHT Center of Bari are to increase knowledge of the disease, to cooperate with other centers with the aim of increasing the number of patients studied and to avoid the limits of therapeutic and diagnostic protocols of a rare disease such as HHT.",
"title": ""
}
] |
scidocsrr
|
40c64c6b6ea0cf9946fe89390ea465b5
|
Performance Analysis of a Part of Speech Tagging Task
|
[
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
}
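A rough sketch of the idea in the passage above, estimating tag-transition probabilities with a decision tree over the two preceding tags instead of raw trigram counts, is given below. It uses scikit-learn's DecisionTreeClassifier as a stand-in (the original TreeTagger grows its own binary decision tree over tag-context tests), and the tiny tagged corpus is invented for illustration.

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Toy tagged corpus: sequences of POS tags (the words themselves are not
# needed for the transition model).
tag_sequences = [
    ["DET", "NOUN", "VERB", "DET", "NOUN"],
    ["DET", "ADJ", "NOUN", "VERB", "ADV"],
    ["NOUN", "VERB", "DET", "ADJ", "NOUN"],
]

# Build (previous-two-tags -> next-tag) training examples.
X_raw, y = [], []
for seq in tag_sequences:
    padded = ["<s>", "<s>"] + seq
    for i in range(2, len(padded)):
        X_raw.append([padded[i - 2], padded[i - 1]])
        y.append(padded[i])

encoder = OrdinalEncoder()  # simple categorical encoding for the illustration
X = encoder.fit_transform(X_raw)

# The decision tree plays the role of a smoothed trigram model: it decides
# which context distinctions matter instead of estimating every trigram
# probability from sparse counts.
tree = DecisionTreeClassifier(min_samples_leaf=1, random_state=0)
tree.fit(X, y)

context = encoder.transform([["DET", "ADJ"]])
for tag, p in zip(tree.classes_, tree.predict_proba(context)[0]):
    print(f"P({tag} | DET, ADJ) = {p:.2f}")
```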
] |
[
{
"docid": "fe536bcb97b9cb905f68f2f8f0d7ae4e",
"text": "Deep generative models based on Generative Adversarial Networks (GANs) have demonstrated impressive sample quality but in order to work they require a careful choice of architecture, parameter initialization, and selection of hyper-parameters. This fragility is in part due to a dimensional mismatch or non-overlapping support between the model distribution and the data distribution, causing their density ratio and the associated f -divergence to be undefined. We overcome this fundamental limitation and propose a new regularization approach with low computational cost that yields a stable GAN training procedure. We demonstrate the effectiveness of this regularizer accross several architectures trained on common benchmark image generation tasks. Our regularization turns GAN models into reliable building blocks for deep learning. 1",
"title": ""
},
{
"docid": "b4c0e5b928058e6467d0642db15e0390",
"text": "We study the application of word embeddings to generate semantic representations for the domain adaptation problem of relation extraction (RE) in the tree kernelbased method. We systematically evaluate various techniques to generate the semantic representations and demonstrate that they are effective to improve the generalization performance of a tree kernel-based relation extractor across domains (up to 7% relative improvement). In addition, we compare the tree kernel-based and the feature-based method for RE in a compatible way, on the same resources and settings, to gain insights into which kind of system is more robust to domain changes. Our results and error analysis shows that the tree kernel-based method outperforms the feature-based approach.",
"title": ""
},
{
"docid": "c5851a9fe60c0127a351668ba5b0f21d",
"text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II). We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.",
"title": ""
},
{
"docid": "f4db5b7cc70661ff780c96cd58f6624e",
"text": "Error Thresholds and Their Relation to Optimal Mutation Rates p. 54 Are Artificial Mutation Biases Unnatural? p. 64 Evolving Mutation Rates for the Self-Optimisation of Genetic Algorithms p. 74 Statistical Reasoning Strategies in the Pursuit and Evasion Domain p. 79 An Evolutionary Method Using Crossover in a Food Chain Simulation p. 89 On Self-Reproduction and Evolvability p. 94 Some Techniques for the Measurement of Complexity in Tierra p. 104 A Genetic Neutral Model for Quantitative Comparison of Genotypic Evolutionary Activity p. 109",
"title": ""
},
{
"docid": "a89cd3351d6a427d18a461893949e0d7",
"text": "Touch is a powerful vehicle for communication between humans. The way we touch (how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and responses in relation to certain emotions. Within this paper we present the findings from a study exploring the communication of emotions through a haptic system that uses tactile stimulation in mid-air. Here, haptic descriptions for specific emotions (e.g., happy, sad, excited, afraid) were created by one group of users to then be reviewed and validated by two other groups of users. We demonstrate the non-arbitrary mapping between emotions and haptic descriptions across three groups. This points to the huge potential for mediating emotions through mid-air haptics. We discuss specific design implications based on the spatial, directional, and haptic parameters of the created haptic descriptions and illustrate their design potential for HCI based on two design ideas.",
"title": ""
},
{
"docid": "21c1493a2de747f9b5878648ee95d470",
"text": "In this summary of previous work, I argue that data becomes temporarily interesting by itself to some selfimproving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more “beautiful.” Curiosity is the desire to create or discover more non-random, non-arbitrary, “truly novel,” regular data that allows for compression progress because its regularity was not yet known. This drive maximizes “interestingness,” the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems.",
"title": ""
},
{
"docid": "9eca36b888845c82cc9e65e6bc0db053",
"text": "Word embeddings resulting from neural language models have been shown to be a great asset for a large variety of NLP tasks. However, such architecture might be difficult and time-consuming to train. Instead, we propose to drastically simplify the word embeddings computation through a Hellinger PCA of the word cooccurence matrix. We compare those new word embeddings with some well-known embeddings on named entity recognition and movie review tasks and show that we can reach similar or even better performance. Although deep learning is not really necessary for generating good word embeddings, we show that it can provide an easy way to adapt embeddings to specific tasks.",
"title": ""
},
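The computation described in the passage above is compact enough to sketch directly: build a word–context co-occurrence matrix, turn each row into a probability distribution, take element-wise square roots (so that Euclidean distance between rows corresponds to Hellinger distance), and reduce dimensionality with PCA. The corpus, window size, and embedding dimension below are toy values, not the paper's setup.

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[index[w], index[corpus[j]]] += 1

# Rows as probability distributions, then element-wise sqrt: Euclidean distance
# between sqrt-rows corresponds (up to a constant) to the Hellinger distance.
probs = counts / counts.sum(axis=1, keepdims=True)
hellinger = np.sqrt(probs)

# PCA via SVD of the centered matrix; keep 2 components as word embeddings.
centered = hellinger - hellinger.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
embeddings = U[:, :2] * S[:2]
for w in vocab:
    print(w, np.round(embeddings[index[w]], 3))
```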
{
"docid": "909d9d1b9054586afc4b303e94acae73",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "08260ba76f242725b8a08cbd8e4ec507",
"text": "Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.",
"title": ""
},
{
"docid": "ce282fba1feb109e03bdb230448a4f8a",
"text": "The goal of two-sample tests is to assess whether two samples, SP ∼ P and SQ ∼ Q, are drawn from the same distribution. Perhaps intriguingly, one relatively unexplored method to build two-sample tests is the use of binary classifiers. In particular, construct a dataset by pairing the n examples in SP with a positive label, and by pairing the m examples in SQ with a negative label. If the null hypothesis “P = Q” is true, then the classification accuracy of a binary classifier on a held-out subset of this dataset should remain near chance-level. As we will show, such Classifier Two-Sample Tests (C2ST) learn a suitable representation of the data on the fly, return test statistics in interpretable units, have a simple null distribution, and their predictive uncertainty allow to interpret where P and Q differ. The goal of this paper is to establish the properties, performance, and uses of C2ST. First, we analyze their main theoretical properties. Second, we compare their performance against a variety of state-of-the-art alternatives. Third, we propose their use to evaluate the sample quality of generative models with intractable likelihoods, such as Generative Adversarial Networks (GANs). Fourth, we showcase the novel application of GANs together with C2ST for causal discovery.",
"title": ""
},
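The classifier two-sample test (C2ST) described in the passage above can be sketched in a few lines: label samples from P as 1 and samples from Q as 0, train a classifier on half the pooled data, and compare held-out accuracy with the chance level of 0.5. The sketch uses logistic regression and a normal approximation for the null distribution of the accuracy; both are reasonable choices but are assumptions of this illustration rather than a statement of the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def c2st(samples_p, samples_q, seed=0):
    """Classifier two-sample test: held-out accuracy near 0.5 supports P = Q."""
    X = np.vstack([samples_p, samples_q])
    y = np.concatenate([np.ones(len(samples_p)), np.zeros(len(samples_q))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    # Under H0 (P = Q), the held-out accuracy is approximately
    # N(0.5, 0.25 / n_test); larger accuracy gives a smaller p-value.
    n_te = len(y_te)
    p_value = norm.sf(acc, loc=0.5, scale=np.sqrt(0.25 / n_te))
    return acc, p_value

rng = np.random.default_rng(0)
same = c2st(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = c2st(rng.normal(size=(500, 2)), rng.normal(0.5, 1.0, size=(500, 2)))
print("P = Q   -> accuracy %.2f, p-value %.3f" % same)
print("P != Q  -> accuracy %.2f, p-value %.3f" % diff)
```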
{
"docid": "7ddfa92cee856e2ef24caf3e88d92b93",
"text": "Applications are getting increasingly interconnected. Although the interconnectedness provide new ways to gather information about the user, not all user information is ready to be directly implemented in order to provide a personalized experience to the user. Therefore, a general model is needed to which users’ behavior, preferences, and needs can be connected to. In this paper we present our works on a personality-based music recommender system in which we use users’ personality traits as a general model. We identified relationships between users’ personality and their behavior, preferences, and needs, and also investigated different ways to infer users’ personality traits from user-generated data of social networking sites (i.e., Facebook, Twitter, and Instagram). Our work contributes to new ways to mine and infer personality-based user models, and show how these models can be implemented in a music recommender system to positively contribute to the user experience.",
"title": ""
},
{
"docid": "237345020161bab7ce0b0bba26c5cc98",
"text": "This paper addresses the difficulty of designing 1-V capable analog circuits in standard digital complementary metal–oxide–semiconductor (CMOS) technology. Design techniques for facilitating 1-V operation are discussed and 1-V analog building block circuits are presented. Most of these circuits use the bulk-driving technique to circumvent the metal– oxide–semiconductor field-effect transistor turn-on (threshold) voltage requirement. Finally, techniques are combined within a 1-V CMOS operational amplifier with rail-to-rail input and output ranges. While consuming 300 W, the 1-V rail-to-rail CMOS op amp achieves 1.3-MHz unity-gain frequency and 57 phase margin for a 22-pF load capacitance.",
"title": ""
},
{
"docid": "d398f84df8acea46b00e0045c4f143a5",
"text": "Online review forums provide consumers with essential information about goods and services by facilitating word-of-mouth communication. Despite that preferences are correlated to demographic characteristics, reviewer gender is not often provided on user profiles. We consider the case of the internet movie database (IMDb), where users exchange views on movies. Like many forums, IMDb employs collaborative filtering such that by default, reviews are ranked by perceived utility. IMDb also provides a unique gender filter that displays an equal number of reviews authored by men and women. Using logistic classification, we compare reviews with respect to writing style, content and metadata features. We find salient differences in stylistic features and content between reviews written by men and women, as predicted by sociolinguistic theory. However, utility is the best predictor of gender, with women’s reviews perceived as being much less useful than those written by men. While we cannot observe who votes at IMDb, we do find that highly rated female-authored reviews exhibit “male” characteristics. Our results have implications for which contributions are likely to be seen, and to what extent participants get a balanced view as to “what others think” about an item.",
"title": ""
},
{
"docid": "a5a86ecd39df5b032f4fa4f22362c914",
"text": "Diet strongly affects human health, partly by modulating gut microbiome composition. We used diet inventories and 16S rDNA sequencing to characterize fecal samples from 98 individuals. Fecal communities clustered into enterotypes distinguished primarily by levels of Bacteroides and Prevotella. Enterotypes were strongly associated with long-term diets, particularly protein and animal fat (Bacteroides) versus carbohydrates (Prevotella). A controlled-feeding study of 10 subjects showed that microbiome composition changed detectably within 24 hours of initiating a high-fat/low-fiber or low-fat/high-fiber diet, but that enterotype identity remained stable during the 10-day study. Thus, alternative enterotype states are associated with long-term diet.",
"title": ""
},
{
"docid": "81ca5239dbd60a988e7457076aac05d7",
"text": "OBJECTIVE\nFrontline health professionals need a \"red flag\" tool to aid their decision making about whether to make a referral for a full diagnostic assessment for an autism spectrum condition (ASC) in children and adults. The aim was to identify 10 items on the Autism Spectrum Quotient (AQ) (Adult, Adolescent, and Child versions) and on the Quantitative Checklist for Autism in Toddlers (Q-CHAT) with good test accuracy.\n\n\nMETHOD\nA case sample of more than 1,000 individuals with ASC (449 adults, 162 adolescents, 432 children and 126 toddlers) and a control sample of 3,000 controls (838 adults, 475 adolescents, 940 children, and 754 toddlers) with no ASC diagnosis participated. Case participants were recruited from the Autism Research Centre's database of volunteers. The control samples were recruited through a variety of sources. Participants completed full-length versions of the measures. The 10 best items were selected on each instrument to produce short versions.\n\n\nRESULTS\nAt a cut-point of 6 on the AQ-10 adult, sensitivity was 0.88, specificity was 0.91, and positive predictive value (PPV) was 0.85. At a cut-point of 6 on the AQ-10 adolescent, sensitivity was 0.93, specificity was 0.95, and PPV was 0.86. At a cut-point of 6 on the AQ-10 child, sensitivity was 0.95, specificity was 0.97, and PPV was 0.94. At a cut-point of 3 on the Q-CHAT-10, sensitivity was 0.91, specificity was 0.89, and PPV was 0.58. Internal consistency was >0.85 on all measures.\n\n\nCONCLUSIONS\nThe short measures have potential to aid referral decision making for specialist assessment and should be further evaluated.",
"title": ""
},
{
"docid": "918c3a6a045454dc9c56b3a0744101e2",
"text": "Cancer immune surveillance is considered to be an important host protection process to inhibit carcinogenesis and to maintain cellular homeostasis. In the interaction of host and tumour cells, three essential phases have been proposed: elimination, equilibrium and escape, which are designated the 'three E's'. Several immune effector cells and secreted cytokines play a critical role in pursuing each process. Nascent transformed cells can initially be eliminated by an innate immune response such as by natural killer cells. During tumour progression, even though an adaptive immune response can be provoked by antigen-specific T cells, immune selection produces tumour cell variants that lose major histocompatibility complex class I and II antigens and decreases amounts of tumour antigens in the equilibrium phase. Furthermore, tumour-derived soluble factors facilitate the escape from immune attack, allowing progression and metastasis. In this review, the central roles of effector cells and cytokines in tumour immunity, and the escape mechanisms of tumour cells during tumour progression are discussed.",
"title": ""
},
{
"docid": "81bfa44ec29532d07031fa3b74ba818d",
"text": "We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.",
"title": ""
},
{
"docid": "41e434a96d528881434e39c536f6b4e7",
"text": "Following recent breakthroughs in convolutional neural networks and monolithic model architectures, state-ofthe-art object detection models can reliably and accurately scale into the realm of up to thousands of classes. Things quickly break down, however, when scaling into the tens of thousands, or, eventually, to millions or billions of unique objects. Further, bounding box-trained end-to-end models require extensive training data. Even though – with some tricks using hierarchies – one can sometimes scale up to thousands of classes, the labor requirements for clean image annotations quickly get out of control. In this paper, we present a two-layer object detection method for brand logos and other stylized objects for which prototypical images exist. It can scale to large numbers of unique classes. Our first layer is a CNN from the Single Shot Multibox Detector family of models that learns to propose regions where some stylized object is likely to appear. The contents of a proposed bounding box is then run against an image index that is targeted for the retrieval task at hand. The proposed architecture scales to a large number of object classes, allows to continuously add new classes without retraining, and exhibits state-of-the-art quality on a stylized object detection task such as logo recognition.",
"title": ""
},
{
"docid": "6952a28e63c231c1bfb43391a21e80fd",
"text": "Deep learning has attracted tremendous attention from researchers in various fields of information engineering such as AI, computer vision, and language processing [Kalchbrenner and Blunsom, 2013; Krizhevsky et al., 2012; Mnih et al., 2013], but also from more traditional sciences such as physics, biology, and manufacturing [Anjos et al., 2015; Baldi et al., 2014; Bergmann et al., 2014]. Neural networks, image processing tools such as convolutional neural networks, sequence processing models such as recurrent neural networks, and regularisation tools such as dropout, are used extensively. However, fields such as physics, biology, and manufacturing are ones in which representing model uncertainty is of crucial importance [Ghahramani, 2015; Krzywinski and Altman, 2013]. With the recent shift in many of these fields towards the use of Bayesian uncertainty [Herzog and Ostwald, 2013; Nuzzo, 2014; Trafimow and Marks, 2015], new needs arise from deep learning. In this work we develop tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation. In the first part of this thesis we develop the theory for such tools, providing applications and illustrative examples. We tie approximate inference in Bayesian models to dropout and other stochastic regularisation techniques, and assess the approximations empirically. We give example applications arising from this connection between modern deep learning and Bayesian modelling such as active learning of image data and data efficient deep reinforcement learning. We further demonstrate the method’s practicality through a survey of recent applications making use of the suggested tools in language applications, medical diagnostics, bioinformatics, image processing, and autonomous driving. In the second part of the thesis we explore its theoretical implications, and the insights stemming from the link between Bayesian modelling and deep learning. We discuss what determines model uncertainty properties, analyse the approximate inference analytically in the linear case, and theoretically examine various priors such as spike and slab priors.",
"title": ""
}
] |
scidocsrr
|
379bc9f0d7e44547dd6a08eb885ccc15
|
Anomaly Detection in Wireless Sensor Networks in a Non-Stationary Environment
|
[
{
"docid": "60fe7f27cd6312c986b679abce3fdea7",
"text": "In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting \"several experts\" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a process in automated decision making applications have only recently been discovered by computational intelligence community. Also known under various other names, such as multiple classifier systems, committee of classifiers, or mixture of experts, ensemble based systems have shown to produce favorable results compared to those of single-expert systems for a broad range of applications and under a variety of scenarios. Design, implementation and application of such systems are the main topics of this article. Specifically, this paper reviews conditions under which ensemble based systems may be more beneficial than their single classifier counterparts, algorithms for generating individual components of the ensemble systems, and various procedures through which the individual classifiers can be combined. We discuss popular ensemble based algorithms, such as bagging, boosting, AdaBoost, stacked generalization, and hierarchical mixture of experts; as well as commonly used combination rules, including algebraic combination of outputs, voting based techniques, behavior knowledge space, and decision templates. Finally, we look at current and future research directions for novel applications of ensemble systems. Such applications include incremental learning, data fusion, feature selection, learning with missing features, confidence estimation, and error correcting output codes; all areas in which ensemble systems have shown great promise",
"title": ""
},
{
"docid": "3be38e070678e358e23cb81432033062",
"text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system",
"title": ""
}
] |
[
{
"docid": "2fa6f761f22e0484a84f83e5772bef40",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "ba0dce539f33496dedac000b61efa971",
"text": "The webpage aesthetics is one of the factors that affect the way people are attracted to a site. But two questions emerge: how can we improve a webpage's aesthetics and how can we evaluate this item? In order to solve this problem, we identified some of the theory that is underlying graphic design, gestalt theory and multimedia design. Based in the literature review, we proposed principles for web site design. We also propose a tool to evaluate web design.",
"title": ""
},
{
"docid": "e726e11f855515017de77508b79d3308",
"text": "OBJECTIVES\nThis study was conducted to better understand the characteristics of chronic pain patients seeking treatment with medicinal cannabis (MC).\n\n\nDESIGN\nRetrospective chart reviews of 139 patients (87 males, median age 47 years; 52 females, median age 48 years); all were legally qualified for MC use in Washington State.\n\n\nSETTING\nRegional pain clinic staffed by university faculty.\n\n\nPARTICIPANTS\n\n\n\nINCLUSION CRITERIA\nage 18 years and older; having legally accessed MC treatment, with valid documentation in their medical records. All data were de-identified.\n\n\nMAIN OUTCOME MEASURES\nRecords were scored for multiple indicators, including time since initial MC authorization, qualifying condition(s), McGill Pain score, functional status, use of other analgesic modalities, including opioids, and patterns of use over time.\n\n\nRESULTS\nOf 139 patients, 15 (11 percent) had prior authorizations for MC before seeking care in this clinic. The sample contained 236.4 patient-years of authorized MC use. Time of authorized use ranged from 11 days to 8.31 years (median of 1.12 years). Most patients were male (63 percent) yet female patients averaged 0.18 years longer authorized use. There were no other gender-specific trends or factors. Most patients (n = 123, 88 percent) had more than one pain syndrome present. Myofascial pain syndrome was the most common diagnosis (n = 114, 82 percent), followed by neuropathic pain (n = 89, 64 percent), discogenic back pain (n = 72, 51.7 percent), and osteoarthritis (n = 37, 26.6 percent). Other diagnoses included diabetic neuropathy, central pain syndrome, phantom pain, spinal cord injury, fibromyalgia, rheumatoid arthritis, HIV neuropathy, visceral pain, and malignant pain. In 51 (37 percent) patients, there were documented instances of major hurdles related to accessing MC, including prior physicians unwilling to authorize use, legal problems related to MC use, and difficulties in finding an affordable and consistent supply of MC.\n\n\nCONCLUSIONS\nData indicate that males and females access MC at approximately the same rate, with similar median authorization times. Although the majority of patient records documented significant symptom alleviation with MC, major treatment access and delivery barriers remain.",
"title": ""
},
{
"docid": "b6dcf2064ad7f06fd1672b1348d92737",
"text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.",
"title": ""
},
{
"docid": "d47143c38598cf88eeb8be654f8a7a00",
"text": "Long Short-Term Memory (LSTM) networks have yielded excellent results on handwriting recognition. This paper describes an application of bidirectional LSTM networks to the problem of machine-printed Latin and Fraktur recognition. Latin and Fraktur recognition differs significantly from handwriting recognition in both the statistical properties of the data, as well as in the required, much higher levels of accuracy. Applications of LSTM networks to handwriting recognition use two-dimensional recurrent networks, since the exact position and baseline of handwritten characters is variable. In contrast, for printed OCR, we used a one-dimensional recurrent network combined with a novel algorithm for baseline and x-height normalization. A number of databases were used for training and testing, including the UW3 database, artificially generated and degraded Fraktur text and scanned pages from a book digitization project. The LSTM architecture achieved 0.6% character-level test-set error on English text. When the artificially degraded Fraktur data set is divided into training and test sets, the system achieves an error rate of 1.64%. On specific books printed in Fraktur (not part of the training set), the system achieves error rates of 0.15% (Fontane) and 1.47% (Ersch-Gruber). These recognition accuracies were found without using any language modelling or any other post-processing techniques.",
"title": ""
},
{
"docid": "0b0273a1e2aeb98eb4115113c8957fd2",
"text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.",
"title": ""
},
{
"docid": "affa4a43b68f8c158090df3a368fe6b6",
"text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.",
"title": ""
},
{
"docid": "49f96e96623502ffe6053cab43054edf",
"text": "Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.",
"title": ""
},
{
"docid": "21ad29105c4b6772b05156afd33ac145",
"text": "High resolution Digital Surface Models (DSMs) produced from airborne laser-scanning or stereo satellite images provide a very useful source of information for automated 3D building reconstruction. In this paper an investigation is reported about extraction of 3D building models from high resolution DSMs and orthorectified images produced from Worldview-2 stereo satellite imagery. The focus is on the generation of 3D models of parametric building roofs, which is the basis for creating Level Of Detail 2 (LOD2) according to the CityGML standard. In particular the building blocks containing several connected buildings with tilted roofs are investigated and the potentials and limitations of the modeling approach are discussed. The edge information extracted from orthorectified image has been employed as additional source of information in 3D reconstruction algorithm. A model driven approach based on the analysis of the 3D points of DSMs in a 2D projection plane is proposed. Accordingly, a building block is divided into smaller parts according to the direction and number of existing ridge lines for parametric building reconstruction. The 3D model is derived for each building part, and finally, a complete parametric model is formed by merging the 3D models of the individual building parts and adjusting the nodes after the merging step. For the remaining building parts that do not contain ridge lines, a prismatic model using polygon approximation of the corresponding boundary pixels is derived and merged to the parametric models to shape the final model of the building. A qualitative and quantitative assessment of the proposed method for the automatic reconstruction of buildings with parametric roofs is then provided by comparing the final model with the existing surface model as well as some field measurements. Remote Sens. 2013, 5 1682",
"title": ""
},
{
"docid": "c89ce1ded524ff65c1ebd3d20be155bc",
"text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.",
"title": ""
},
{
"docid": "16741aac03ea1a864ddab65c8c73eb7c",
"text": "This report describes a preliminary evaluation of performance of a cell-FPGA-like architecture for future hybrid \"CMOL\" circuits. Such circuits will combine a semiconduc-tor-transistor (CMOS) stack and a two-level nanowire crossbar with molecular-scale two-terminal nanodevices (program-mable diodes) formed at each crosspoint. Our cell-based architecture is based on a uniform CMOL fabric of \"tiles\". Each tile consists of 12 four-transistor basic cells and one (four times larger) latch cell. Due to high density of nanodevices, which may be used for both logic and routing functions, CMOL FPGA may be reconfigured around defective nanodevices to provide high defect tolerance. Using a semi-custom set of design automation tools we have evaluated CMOL FPGA performance for the Toronto 20 benchmark set, so far without optimization of several parameters including the power supply voltage and nanowire pitch. The results show that even without such optimization, CMOL FPGA circuits may provide a density advantage of more than two orders of magnitude over the traditional CMOS FPGA with the same CMOS design rules, at comparable time delay, acceptable power consumption and potentially high defect tolerance.",
"title": ""
},
{
"docid": "cffce89fbb97dc1d2eb31a060a335d3c",
"text": "This doctoral thesis deals with a number of challenges related to investigating and devising solutions to the Sentiment Analysis Problem, a subset of the discipline known as Natural Language Processing (NLP), following a path that differs from the most common approaches currently in-use. The majority of the research and applications building in Sentiment Analysis (SA) / Opinion Mining (OM) have been conducted and developed using Supervised Machine Learning techniques. It is our intention to prove that a hybrid approach merging fuzzy sets, a solid sentiment lexicon, traditional NLP techniques and aggregation methods will have the effect of compounding the power of all the positive aspects of these tools. In this thesis we will prove three main aspects, namely: 1. That a Hybrid Classification Model based on the techniques mentioned in the previous paragraphs will be capable of: (a) performing same or better than established Supervised Machine Learning techniques -namely, Naı̈ve Bayes and Maximum Entropy (ME)when the latter are utilised respectively as the only classification methods being applied, when calculating subjectivity polarity, and (b) computing the intensity of the polarity previously estimated. 2. That cross-ratio uninorms can be used to effectively fuse the classification outputs of several algorithms producing a compensatory effect. 3. That the Induced Ordered Weighted Averaging (IOWA) operator is a very good choice to model the opinion of the majority (consensus) when the outputs of a number of classification methods are combined together. For academic and experimental purposes we have built the proposed methods and associated prototypes in an iterative fashion: • Step 1: we start with the so-called Hybrid Standard Classification (HSC) method, responsible for subjectivity polarity determination. • Step 2: then, we have continued with the Hybrid Advanced Classification (HAC) method that computes the polarity intensity of opinions/sentiments. • Step 3: in closing, we present two methods that produce a semantic-specific aggregation of two or more classification methods, as a complement to the HSC/HAC methods when the latter cannot generate a classification value or when we are looking for an aggregation that implies consensus, respectively: ◦ the Hybrid Advanced Classification with Aggregation by Cross-ratio Uninorm (HACACU) method. ◦ the Hybrid Advanced Classification with Aggregation by Consensus (HACACO) method.",
"title": ""
},
{
"docid": "8c853251e0fb408c829e6f99a581d4cf",
"text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.",
"title": ""
},
{
"docid": "fb89a5aa87f1458177d6a32ef25fdf3b",
"text": "The increase in population, the rapid economic growth and the rise in community living standards accelerate municipal solid waste (MSW) generation in developing cities. This problem is especially serious in Pudong New Area, Shanghai, China. The daily amount of MSW generated in Pudong was about 1.11 kg per person in 2006. According to the current population growth trend, the solid waste quantity generated will continue to increase with the city's development. In this paper, we describe a waste generation and composition analysis and provide a comprehensive review of municipal solid waste management (MSWM) in Pudong. Some of the important aspects of waste management, such as the current status of waste collection, transport and disposal in Pudong, will be illustrated. Also, the current situation will be evaluated, and its problems will be identified.",
"title": ""
},
{
"docid": "bcd16100ca6814503e876f9f15b8c7fb",
"text": "OBJECTIVE\nBrain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG.\n\n\nMETHODS\nA total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters.\n\n\nRESULTS\nThe classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard.\n\n\nCONCLUSIONS\nThis is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.",
"title": ""
},
{
"docid": "8e324cf4900431593d9ebc73e7809b23",
"text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.",
"title": ""
},
{
"docid": "62166980f94bba5e75c9c6ad4a4348f1",
"text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "13974867d98411b6a999374afcc5b2cb",
"text": "Current best local descriptors are learned on a large dataset of matching and non-matching keypoint pairs. However, data of this kind is not always available since detailed keypoint correspondences can be hard to establish. On the other hand, we can often obtain labels for pairs of keypoint bags. For example, keypoint bags extracted from two images of the same object under different views form a matching pair, and keypoint bags extracted from images of different objects form a non-matching pair. On average, matching pairs should contain more corresponding keypoints than non-matching pairs. We describe an end-to-end differentiable architecture that enables the learning of local keypoint descriptors from such weakly-labeled data.",
"title": ""
},
{
"docid": "bc7f80192416aa7787657aed1bda3997",
"text": "In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from the over-dependence on a single modality as well as a lack of training data. We made three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework to enrich field-of-views and features to make segmentation more reliable. Secondly, we repurpose the datasets form other tasks to the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid over- fitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.",
"title": ""
}
] |
scidocsrr
|
8f37b402bb1ac9b58883707aee4a2b5c
|
RELIABILITY-BASED MANAGEMENT OF BURIED PIPELINES
|
[
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
] |
[
{
"docid": "8abd03202f496de4bec6270946d53a9c",
"text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.",
"title": ""
},
{
"docid": "80e9f9261397cb378920a6c897fd352a",
"text": "Purpose: This study develops a comprehensive research model that can explain potential customers’ behavioral intentions to adopt and use smart home services. Methodology: This study proposes and validates a new theoretical model that extends the theory of planned behavior (TPB). Partial least squares analysis (PLS) is employed to test the research model and corresponding hypotheses on data collected from 216 survey samples. Findings: Mobility, security/privacy risk, and trust in the service provider are important factors affecting the adoption of smart home services. Practical implications: To increase potential users’ adoption rate, service providers should focus on developing mobility-related services that enable people to access smart home services while on the move using mobile devices via control and monitoring functions. Originality/Value: This study is the first empirical attempt to examine user acceptance of smart home services, as most of the prior literature has concerned technical features.",
"title": ""
},
{
"docid": "7bd440a6c7aece364877dbb5170cfcfb",
"text": "Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets.",
"title": ""
},
{
"docid": "29e56287071ca1fc1bf3d83f67b3ce8d",
"text": "In this paper, we seek to identify factors that might increase the likelihood of adoption and continued use of cyberinfrastructure by scientists. To do so, we review the main research on Information and Communications Technology (ICT) adoption and use by addressing research problems, theories and models used, findings, and limitations. We focus particularly on the individual user perspective. We categorize previous studies into two groups: Adoption research and post-adoption (continued use) research. In addition, we review studies specifically regarding cyberinfrastructure adoption and use by scientists and other special user groups. We identify the limitations of previous theories, models and research findings appearing in the literature related to our current interest in scientists’ adoption and continued use of cyber-infrastructure. We synthesize the previous theories and models used for ICT adoption and use, and then we develop a theoretical framework for studying scientists’ adoption and use of cyber-infrastructure. We also proposed a research design based on the research model developed. Implications for researchers and practitioners are provided.",
"title": ""
},
{
"docid": "da9ffb00398f6aad726c247e3d1f2450",
"text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.",
"title": ""
},
{
"docid": "59e02bc986876edc0ee0a97fd4d12a28",
"text": "CONTEXT\nSocial anxiety disorder is thought to involve emotional hyperreactivity, cognitive distortions, and ineffective emotion regulation. While the neural bases of emotional reactivity to social stimuli have been described, the neural bases of emotional reactivity and cognitive regulation during social and physical threat, and their relationship to social anxiety symptom severity, have yet to be investigated.\n\n\nOBJECTIVE\nTo investigate behavioral and neural correlates of emotional reactivity and cognitive regulation in patients and controls during processing of social and physical threat stimuli.\n\n\nDESIGN\nParticipants were trained to implement cognitive-linguistic regulation of emotional reactivity induced by social (harsh facial expressions) and physical (violent scenes) threat while undergoing functional magnetic resonance imaging and providing behavioral ratings of negative emotion experience.\n\n\nSETTING\nAcademic psychology department.\n\n\nPARTICIPANTS\nFifteen adults with social anxiety disorder and 17 demographically matched healthy controls.\n\n\nMAIN OUTCOME MEASURES\nBlood oxygen level-dependent signal and negative emotion ratings.\n\n\nRESULTS\nBehaviorally, patients reported greater negative emotion than controls during social and physical threat but showed equivalent reduction in negative emotion following cognitive regulation. Neurally, viewing social threat resulted in greater emotion-related neural responses in patients than controls, with social anxiety symptom severity related to activity in a network of emotion- and attention-processing regions in patients only. Viewing physical threat produced no between-group differences. Regulation during social threat resulted in greater cognitive and attention regulation-related brain activation in controls compared with patients. Regulation during physical threat produced greater cognitive control-related response (ie, right dorsolateral prefrontal cortex) in patients compared with controls.\n\n\nCONCLUSIONS\nCompared with controls, patients demonstrated exaggerated negative emotion reactivity and reduced cognitive regulation-related neural activation, specifically for social threat stimuli. These findings help to elucidate potential neural mechanisms of emotion regulation that might serve as biomarkers for interventions for social anxiety disorder.",
"title": ""
},
{
"docid": "b13c9597f8de229fb7fec3e23c0694d1",
"text": "Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "a4e6b629ec4b0fdf8784ba5be1a62260",
"text": "Today's real-world databases typically contain millions of items with many thousands of fields. As a result, traditional distribution-based outlier detection techniques have more and more restricted capabilities and novel k-nearest neighbors based approaches have become more and more popular. However, the problems with these k-nearest neighbors rankings for top n outliers, are very computationally expensive for large datasets, and doubts exist in general whether they would work well for high dimensional datasets. To partially circumvent these problems, we propose in this paper a new global outlier factor and a new local outlier factor and an efficient outlier detection algorithm developed upon them that is easy to implement and can provide competing performances with existing solutions. Experiments performed on both synthetic and real data sets demonstrate the efficacy of our method. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "494d720d5a8c7c58b795c5c6131fa8d1",
"text": "The increasing emergence of pervasive information systems requires a clearer understanding of the underlying characteristics in relation to user acceptance. Based on the integration of UTAUT2 and three pervasiveness constructs, we derived a comprehensive research model to account for pervasive information systems. Data collected from 346 participants in an online survey was analyzed to test the developed model using structural equation modeling and taking into account multigroup analysis. The results confirm the applicability of the integrated UTAUT2 model to measure pervasiveness. Implications for research and practice are discussed together with future research opportunities.",
"title": ""
},
{
"docid": "d94d49cde6878e0841c1654090062559",
"text": "In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation. The results show that the representation is quite effective for a wide variety of real-world graphs, including graphs from finite-element meshes, circuits, street maps, router connectivity, and web links. In addition to significantly reducing the memory requirements, our implementation of the representation is faster than standard representations for queries. The byte codes we introduce lead to DFT times that are a factor of 2.5 faster than our previous results with gamma codes and a factor of between 1 and 9 faster than adjacency lists, while using a factor of between 3 and 6 less space.",
"title": ""
},
{
"docid": "0e45e57b4e799ebf7e8b55feded7e9e1",
"text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.",
"title": ""
},
{
"docid": "0218c583a8658a960085ddf813f38dbf",
"text": "The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.",
"title": ""
},
{
"docid": "1b5fc0a7b39bedcac9bdc52584fb8a22",
"text": "Neem (Azadirachta indica) is a medicinal plant of containing diverse chemical active substances of several biological properties. So, the aim of the current investigation was to assess the effects of water leaf extract of neem plant on the survival and healthy status of Nile tilapia (Oreochromis niloticus), African cat fish (Clarias gariepinus) and zooplankton community. The laboratory determinations of lethal concentrations (LC 100 and LC50) through a static bioassay test were performed. The 24 h LC100 of neem leaf extract was estimated as 4 and 11 g/l, for juvenile's O. niloticus and C. gariepinus, respectively, while, the 96-h LC50 was 1.8 and 4 g/l, respectively. On the other hand, the 24 h LC100 for cladocera and copepoda were 0.25 and 0.45 g/l, respectively, while, the 96-h LC50 was 0.1 and 0.2 g/l, respectively. At the highest test concentrations, adverse effects were obvious with significant reductions in several cladoceran and copepod species. Some alterations in glucose levels, total protein, albumin, globulin as well as AST and ALT in plasma of treated O. niloticus and C. gariepinus with /2 and /10 LC50 of neem leaf water extract compared with non-treated one after 2 and 7 days of exposure were recorded and discussed. It could be concluded that the application of neem leaf extract can be used to control unwanted organisms in ponds as environment friendly material instead of deleterious pesticides. Also, extensive investigations should be established for the suitable methods of application in aquatic animal production facilities to be fully explored in future.",
"title": ""
},
{
"docid": "cd4e2e3af17cd84d4ede35807e71e783",
"text": "A proposal for saliency computation within the visual cortex is put forth based on the premise that localized saliency computation serves to maximize information sampled from one's environment. The model is built entirely on computational constraints but nevertheless results in an architecture with cells and connectivity reminiscent of that appearing in the visual cortex. It is demonstrated that a variety of visual search behaviors appear as emergent properties of the model and therefore basic principles of coding and information transmission. Experimental results demonstrate greater efficacy in predicting fixation patterns across two different data sets as compared with competing models.",
"title": ""
},
{
"docid": "f73cd33c8dfc9791558b239aede6235b",
"text": "Web clustering engines organize search results by topic, thus offering a complementary view to the flat-ranked list returned by conventional search engines. In this survey, we discuss the issues that must be addressed in the development of a Web clustering engine, including acquisition and preprocessing of search results, their clustering and visualization. Search results clustering, the core of the system, has specific requirements that cannot be addressed by classical clustering algorithms. We emphasize the role played by the quality of the cluster labels as opposed to optimizing only the clustering structure. We highlight the main characteristics of a number of existing Web clustering engines and also discuss how to evaluate their retrieval performance. Some directions for future research are finally presented.",
"title": ""
},
{
"docid": "4dba2a9a29f58b55a6b2c3101acf2437",
"text": "Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play role in pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations.",
"title": ""
},
{
"docid": "e2807120a8a04a9c5f5f221e413aec4d",
"text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.",
"title": ""
},
{
"docid": "6a470404c36867a18a98fafa9df6848f",
"text": "Memory links use variable-impedance drivers, feed-forward equalization (FFE) [1], on-die termination (ODT) and slew-rate control to optimize the signal integrity (SI). An asymmetric DRAM link configuration exploits the availability of a fast CMOS technology on the memory controller side to implement powerful equalization, while keeping the circuit complexity on the DRAM side relatively simple. This paper proposes the use of Tomlinson Harashima precoding (THP) [2-4] in a memory controller as replacement of the afore-mentioned SI optimization techniques. THP is a transmitter equalization technique in which post-cursor inter-symbol interference (ISI) is cancelled by means of an infinite impulse response (IIR) filter with modulo-based amplitude limitation; similar to a decision feedback equalizer (DFE) on the receive side. However, in contrast to a DFE, THP does not suffer from error propagation.",
"title": ""
},
{
"docid": "570e48e839bd2250473d4332adf2b53f",
"text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at caner hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.",
"title": ""
}
] |
scidocsrr
|
64e566a86f2457c5e96b36d9a0b95d42
|
Data Curation at Scale: The Data Tamer System
|
[
{
"docid": "a15f80b0a0ce17ec03fa58c33c57d251",
"text": "The World-Wide Web consists of a huge number of unstructured documents, but it also contains structured data in the form of HTML tables. We extracted 14.1 billion HTML tables from Google’s general-purpose web crawl, and used statistical classification techniques to find the estimated 154M that contain high-quality relational data. Because each relational table has its own “schema” of labeled and typed columns, each such table can be considered a small structured database. The resulting corpus of databases is larger than any other corpus we are aware of, by at least five orders of magnitude. We describe the WebTables system to explore two fundamental questions about this collection of databases. First, what are effective techniques for searching for structured data at search-engine scales? Second, what additional power can be derived by analyzing such a huge corpus? First, we develop new techniques for keyword search over a corpus of tables, and show that they can achieve substantially higher relevance than solutions based on a traditional search engine. Second, we introduce a new object derived from the database corpus: the attribute correlation statistics database (AcsDB) that records corpus-wide statistics on cooccurrences of schema elements. In addition to improving search relevance, the AcsDB makes possible several novel applications: schema auto-complete, which helps a database designer to choose schema elements; attribute synonym finding, which automatically computes attribute synonym pairs for schema matching; and join-graph traversal, which allows a user to navigate between extracted schemas using automatically-generated join links. ∗Work done while all authors were at Google, Inc. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commer cial advantage, the VLDB copyright notice and the title of the publication an d its date appear, and notice is given that copying is by permission of the Very L arge Data Base Endowment. To copy otherwise, or to republish, to post o n servers or to redistribute to lists, requires a fee and/or special pe rmission from the publisher, ACM. VLDB ’08 Auckland, New Zealand Copyright 2008 VLDB Endowment, ACM 000-0-00000-000-0/00/ 00.",
"title": ""
}
] |
[
{
"docid": "0fba05a38cb601a1b08e6105e6b949c1",
"text": "This paper discusses how to implement Paillier homomorphic encryption (HE) scheme in Java as an API. We first analyze existing Pailler HE libraries and discuss their limitations. We then design a comparatively accomplished and efficient Pailler HE Java library. As a proof of concept, we applied our Pailler HE library in an electronic voting system that allows the voting server to sum up the candidates' votes in the encrypted form with voters remain anonymous. Our library records an average of only 2766ms for each vote placement through HTTP POST request.",
"title": ""
},
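The Paillier passage above does not show the library's actual API, so here is a minimal Python sketch of textbook Paillier key generation, encryption, and the additive homomorphism that lets a voting server sum encrypted votes. The toy primes and function names are illustrative assumptions and far too small for real security; padding, key sizes, and the HTTP layer mentioned in the abstract are omitted.

```python
import math
import random

def lcm(a, b):
    return a // math.gcd(a, b) * b

def keygen(p, q):
    # Toy key generation from two primes; real use needs large random primes.
    n = p * q
    n2 = n * n
    lam = lcm(p - 1, q - 1)
    g = n + 1                                   # common simplification: g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(u) = (u - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(pub, c1, c2):
    # Additive homomorphism: the product of ciphertexts decrypts to the sum of plaintexts.
    n, _ = pub
    return (c1 * c2) % (n * n)

if __name__ == "__main__":
    pub, priv = keygen(61, 53)            # toy primes, insecure; for illustration only
    votes = [1, 0, 1, 1, 0]               # 1 = vote for candidate A
    tally = encrypt(pub, 0)
    for v in votes:
        tally = add_encrypted(pub, tally, encrypt(pub, v))
    print(decrypt(pub, priv, tally))      # -> 3, computed without decrypting single votes
```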
{
"docid": "488f907d1c33e9a82aafd7b9a28dae24",
"text": "SPECIAL ISSUE Managing Traumatic Brain Injury: Appropriate Assessment and a Rationale for Using Neurofeedback and Biofeedback to Enhance Recovery in Postconcussion Syndrome Michael Thompson, MD, Lynda Thompson, PhD, CPsych, BCN, Andrea Reid-Chung, MA, CCC, BCN, and James Thompson, PhD, BCN ADD Centre/Biofeedback Institute of Toronto, Mississauga, Ontario, Canada; Evoke Neuroscience, Inc., New York, NY",
"title": ""
},
{
"docid": "67af0ebeebec40efa792a010ce205890",
"text": "We present a near-optimal polynomial-time approximation algorithm for the asymmetric traveling salesman problem for graphs of bounded orientable or non-orientable genus. Given any algorithm that achieves an approximation ratio of f(n) on arbitrary n-vertex graphs as a black box, our algorithm achieves an approximation factor of O(f(g)) on graphs with genus g. In particular, the O(log n/loglog n)-approximation algorithm for general graphs by Asadpour et al. [SODA 2010] immediately implies an O(log g/loglog g)-approximation algorithm for genus-g graphs. Moreover, recent results on approximating the genus of graphs imply that our O(log g/loglog g)-approximation algorithm can be applied to bounded-degree graphs even if no genus-g embedding of the graph is given. Our result improves and generalizes the o(√ g log g)-approximation algorithm of Oveis Gharan and Saberi [SODA 2011], which applies only to graphs with orientable genus g and requires a genus-g embedding as part of the input, even for bounded-degree graphs. Finally, our techniques yield a O(1)-approximation algorithm for ATSP on graphs of genus g with running time 2O(g) · nO(1).",
"title": ""
},
{
"docid": "5dd8a03ed05440ca1f42c2e2920069a1",
"text": "This paper introduces the capacitive bulk acoustic wave (BAW) silicon disk gyroscope. The capacitive BAW disk gyroscopes operate in the frequency range of 2-8MHz, are stationary devices with vibration amplitudes less than 20nm, and achieve very high quality factors (Q) in low vacuum (and even in atmosphere), which simplifies their wafer-scale packaging. The device has lower operating voltages compared to low-frequency gyroscopes, which simplifies the interface circuit design and implementation in standard CMOS",
"title": ""
},
{
"docid": "df10984391cfb52e8ece9ae3766754c1",
"text": "A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels.\n We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.",
"title": ""
},
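The three-step label-transfer loop described above (assign the image label to all proposals, train, re-select confident proposals, retrain) can be summarized as a small self-training skeleton. The sketch below is a generic illustration of that paradigm, not the paper's two-network method; the proposal generator, feature extractor, and confidence threshold are placeholder assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weakly_supervised_rounds(images, image_labels, propose, featurize,
                             n_rounds=3, conf_thresh=0.8):
    """Generic WSOD-style self-training: image-level labels -> instance-level labels.

    propose(img)   -> list of candidate regions for an image (placeholder)
    featurize(roi) -> 1-D feature vector for a region (placeholder)
    """
    # Round 0: every proposal inherits its image's label (noisy supervision).
    feats, labels = [], []
    for img, y in zip(images, image_labels):
        for region in propose(img):
            feats.append(featurize(region))
            labels.append(y)
    X, y0 = np.array(feats), np.array(labels)

    clf = LogisticRegression(max_iter=1000).fit(X, y0)
    for _ in range(n_rounds):
        # Keep only proposals the current model is confident about, then retrain
        # on its own pseudo-labels (the iterative label transfer described above).
        proba = clf.predict_proba(X)
        pseudo = clf.classes_[proba.argmax(axis=1)]
        keep = proba.max(axis=1) >= conf_thresh
        if keep.sum() < 2 or len(set(pseudo[keep])) < 2:
            break                      # not enough reliable instances to retrain
        clf = LogisticRegression(max_iter=1000).fit(X[keep], pseudo[keep])
    return clf
```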
{
"docid": "fe34fcd09a10c382596cffcd13f17a3c",
"text": "As Granular Computing has gained interest, more research has lead into using different representations for Information Granules, i.e., rough sets, intervals, quotient space, fuzzy sets; where each representation offers different approaches to information granulation. These different representations have given more flexibility to what information granulation can achieve. In this overview paper, the focus is only on journal papers where Granular Computing is studied when fuzzy logic systems are used, covering research done with Type-1 Fuzzy Logic Systems, Interval Type-2 Fuzzy Logic Systems, as well as the usage of general concepts of Fuzzy Systems.",
"title": ""
},
{
"docid": "d93abfdc3bc20a23e533f3ad2e30b9c9",
"text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.",
"title": ""
},
{
"docid": "69789348f1e05c5de8b68ba7dbaa2c73",
"text": "For hands-free man-machine audio interfaces with multi-channel sound reproduction and automatic speech recognition (ASR), both a multi-channel acoustic echo canceller (M-C AEC) and a beamforming microphone array are necessary for sufficient recognition rates. Based on known strategies for combining single-channel AEC and adaptive beamforming microphone arrays, we discuss special aspects for the extension to multi-channel AEC and propose an efficient system that can be implemented on a regular PC.",
"title": ""
},
{
"docid": "2bf619a1af1bab48b4b6f57df8f29598",
"text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.",
"title": ""
},
{
"docid": "d733f07d3b022ad8a7020c05292bcddd",
"text": "In Chapter 9 we discussed quality management models with examples of in-process metrics and reports. The models cover both the front-end design and coding activities and the back-end testing phases of development. The focus of the in-process data and reports, however, are geared toward the design review and code inspection data, although testing data is included. This chapter provides a more detailed discussion of the in-process metrics from the testing perspective. 1 These metrics have been used in the IBM Rochester software development laboratory for some years with continual evolution and improvement, so there is ample implementation experience with them. This is important because although there are numerous metrics for software testing, and new ones being proposed frequently, relatively few are supported by sufficient experiences of industry implementation to demonstrate their usefulness. For each metric, we discuss its purpose, data, interpretation , and use, and provide a graphic example based on real-life data. Then we discuss in-process quality management vis-à-vis these metrics and revisit the metrics 271 1. This chapter is a modified version of a white paper written for the IBM corporate-wide Software Test Community Leaders (STCL) group, which was published as \" In-process Metrics for Software Testing, \" in",
"title": ""
},
{
"docid": "57459aad1eac1c6151dc7a101042f23e",
"text": "Modern phenotyping and plant disease detection provide promising step towards food security and sustainable agriculture. In particular, imaging and computer vision based phenotyping offers the ability to study quantitative plant physiology. On the contrary, manual interpretation requires tremendous amount of work, expertise in plant diseases, and also requires excessive processing time. In this work, we present an approach that integrates image processing and machine learning to allow diagnosing diseases from leaf images. This automated method classifies diseases (or absence thereof) on potato plants from a publicly available plant image database called ‘Plant Village’. Our segmentation approach and utilization of support vector machine demonstrate disease classification over 300 images with an accuracy of 95%. Thus, the proposed approach presents a path toward automated plant diseases diagnosis on a massive scale.",
"title": ""
},
{
"docid": "d483da5197688c5deede276b63d81867",
"text": "We present a stochastic model of the daily operations of an airline. Its primary purpose is to evaluate plans, such as crew schedules, as well as recovery policies in a random environment. We describe the structure of the stochastic model, sources of disruptions, recovery policies, and performance measures. Then, we describe SimAir—our simulation implementation of the stochastic model, and we give computational results. Finally, we give future directions for the study of airline recovery policies and planning under uncertainty.",
"title": ""
},
{
"docid": "4cc52c8b6065d66472955dff9200b71f",
"text": "Over the past few years there has been an increasing focus on the development of features for resource management within the Linux kernel. The addition of the fair group scheduler has enabled the provisioning of proportional CPU time through the specification of group weights. Since the scheduler is inherently workconserving in nature, a task or a group can consume excess CPU share in an otherwise idle system. There are many scenarios where this extra CPU share can cause unacceptable utilization or latency. CPU bandwidth provisioning or limiting approaches this problem by providing an explicit upper bound on usage in addition to the lower bound already provided by shares. There are many enterprise scenarios where this functionality is useful. In particular are the cases of payper-use environments, and latency provisioning within non-homogeneous environments. This paper details the requirements behind this feature, the challenges involved in incorporating into CFS (Completely Fair Scheduler), and the future development road map for this feature. 1 CPU as a manageable resource Before considering the aspect of bandwidth provisioning let us first review some of the basic existing concepts currently arbitrating entity management within the scheduler. There are two major scheduling classes within the Linux CPU scheduler, SCHED_RT and SCHED_NORMAL. When runnable, entities from the former, the real-time scheduling class, will always be elected to run over those from the normal scheduling class. Prior to v2.6.24, the scheduler had no notion of any entity larger than that of single task1. The available management APIs reflected this and the primary control of bandwidth available was nice(2). In v2.6.24, the completely fair scheduler (CFS) was merged, replacing the existing SCHED_NORMAL scheduling class. This new design delivered weight based scheduling of CPU bandwidth, enabling arbitrary partitioning. This allowed support for group scheduling to be added, managed using cgroups through the CPU controller sub-system. This support allows for the flexible creation of scheduling groups, allowing the fraction of CPU resources received by a group of tasks to be arbitrated as a whole. The addition of this support has been a major step in scheduler development, enabling Linux to align more closely with enterprise requirements for managing this resouce. The hierarchies supported by this model are flexible, and groups may be nested within groups. Each group entity’s bandwidth is provisioned using a corresponding shares attribute which defines its weight. Similarly, the nice(2) API was subsumed to control the weight of an individual task entity. Figure 1 shows the hierarchical groups that might be created in a typical university server to differentiate CPU bandwidth between users such as professors, students, and different departments. One way to think about shares is that it provides lowerbound provisioning. When CPU bandwidth is scheduled at capacity, all runnable entities will receive bandwidth in accordance with the ratio of their share weight. It’s key to observe here that not all entities may be runnable 1Recall that under Linux any kernel-backed thread is considered individual task entity, there is no typical notion of a process in scheduling context.",
"title": ""
},
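The shares (lower bound) versus quota (upper bound) distinction discussed above maps onto a few cgroup control files. The snippet below shows how a group could be configured under the cgroup-v1 CPU controller layout that the passage assumes; the mount path and group name are illustrative, and writing these files requires appropriate privileges.

```python
import os

CG = "/sys/fs/cgroup/cpu/demo_group"   # cgroup-v1 CPU controller; path and name are illustrative

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

os.makedirs(CG, exist_ok=True)                        # creating the directory creates the cgroup
write(os.path.join(CG, "cpu.shares"), 512)            # proportional weight (lower bound)
write(os.path.join(CG, "cpu.cfs_period_us"), 100000)  # 100 ms accounting period
write(os.path.join(CG, "cpu.cfs_quota_us"), 25000)    # cap of 25 ms per period, roughly 25% of one CPU
write(os.path.join(CG, "tasks"), os.getpid())         # move this process into the group
```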
{
"docid": "c549c95c71d0e0514ac2a5238a2fba54",
"text": "Apple's Touch ID serves as an alternative to PIN/password for unlocking Apple devices, signing into third party iOS applications, and authorizing purchases on the iTunes Store by simply tapping a registered finger on the home button.\n This paper investigates the user comprehension and risk perception of Apple's Touch ID technology. We conducted two user studies to assess user perceptions in three domains: 1) Touch ID authentication process for third party applications; 2) fingerprint access and storage; and 3) ease of circumventing Touch ID. We first conducted an in-person study with 30 participants and then validated our findings through an online study over Amazon Turk on a larger sample of 125 participants. Our findings show that Touch ID users are unaware of the Touch ID authentication process for signing into applications, and have incorrect perceptions regarding the storage/access of their registered fingerprint before and after Touch ID authentication.",
"title": ""
},
{
"docid": "1d661e5fca07683a3595149502de3a1b",
"text": "Introduction Azo dyes – a fatal threat or a new generation of drugs? In 1858 Griss developed the azo coupling reaction and obtained first azo dye – Aniline Yellow. Nowadays, around 10 thousands of these compounds are described and more than 2 thousands are applied to color various materials. Azo dyes are characterized by the presence of the azo moiety (–N=N–) in their structure, conjugated with two, distinct or identical, monoor polycyclic aromatic systems. Because of their specific physico-chemical properties and biological activities, they have found a broad application in pharmaceutical, cosmetic, food, dyeing/textile industry and analytical chemistry. However, the most typical and popular field of utility remains their coloring function. Azo dyes are the largest and the most versatile class of dyes. They possess intense bright colors, in particular oranges, reds and yellows. In addition, azo dyes exhibit a variety of interesting biological activities. Medical importance of these compounds is well known for their antibiotic, antifungal and anti-HIV properties. On the other hand they bring a certain danger for health and environment because of canceroand mutagenicity. In this review, selected synthetic strategies and biological activities of azo dyes are presented, the latter in the context of a therapeutic potential and a hazard connected with their production and application.",
"title": ""
},
{
"docid": "36d0c6ba49223becc0e28c4b197b17a3",
"text": "Wastewater treatment plants (WWTPs) have been identified as potential sources of antibiotic resistance genes (ARGs) but the effects of tertiary wastewater treatment processes on ARGs have not been well characterized. Therefore, the objective of this study was to determine the fate of ARGs throughout a tertiary-stage WWTP. Two ARGs, sul1 and bla, were quantified via quantitative polymerase chain reaction (qPCR) in solids and dissolved fractions of raw sewage, activated sludge, secondary effluent and tertiary effluent from a full-scale WWTP. Tertiary media filtration and chlorine disinfection were studied further with the use of a pilot-scale media filter. Results showed that both genes were reduced at each successive stage of treatment in the dissolved fraction. The solids-associated ARGs increased during activated sludge stage and were reduced in each subsequent stage. Overall reductions were approximately four log10 with the tertiary media filtration and disinfection providing the largest decrease. The majority of ARGs were solids-associated except for in the tertiary effluent. There was no evidence for positive selection of ARGs during treatment. The removal of ARGs by chlorine was improved by filtration compared to unfiltered, chlorinated secondary effluent. This study demonstrates that tertiary-stage WWTPs with disinfection can provide superior removal of ARGs compared to secondary treatment alone.",
"title": ""
},
{
"docid": "5dbd99fa88cacc944874f2729cd3e4a1",
"text": "This paper presents a fast algorithm for deriving the defocus map from a single image. Existing methods of defocus map estimation often include a pixel-level propagation step to spread the measured sparse defocus cues over the whole image. Since the pixel-level propagation step is time-consuming, we develop an effective method to obtain the whole-image defocus blur using oversegmentation and transductive inference. Oversegmentation produces the superpixels and hence greatly reduces the computation costs for subsequent procedures. Transductive inference provides a way to calculate the similarity between superpixels, and thus helps to infer the defocus blur of each superpixel from all other superpixels. The experimental results show that our method is efficient and able to estimate a plausible superpixel-level defocus map from a given single image.",
"title": ""
},
{
"docid": "133d03375f93505679bb005d3ec9ed87",
"text": "Speech emotion recognition as a significant part has become a challenge to artificial emotion. It is particularly difficult to recognize emotion independent of the person concentrating on the speech channel. In the paper, an integrated system of hidden Markov model (HMM) and support vector machine (SVM), which combining advantages on capability to dynamic time warping of HMM and pattern recognition of SVM, has been proposed to implement speaker independent emotion classification. Firstly, all emotions are divided into two groups by SVM. Then, HMMs are used to discriminate emotions from each group. For a more robust estimation, we also combine four HMMs classifiers into a system. The recognition result of the fusion system has been compared with the isolated HMMs using Mandarin database. Experimental results demonstrate that comparing with the method based on only HMMs, the proposed system is more effective and the average recognition rate reaches 76.1% when speaker is independent.",
"title": ""
},
{
"docid": "572ae23dd73dfb0a7cbc04d05772528f",
"text": "Machine learning models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. We hypothesize that this counterintuitive behavior is a result of the high-dimensional geometry of the data manifold, and explore this hypothesis on a simple highdimensional dataset. For this dataset we show a fundamental bound relating the classification error rate to the average distance to the nearest misclassification, which is independent of the model. We train different neural network architectures on this dataset and show their error sets approach this theoretical bound. As a result of the theory, the vulnerability of machine learning models to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this foundational synthetic case will point a way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.",
"title": ""
},
{
"docid": "4f74d7e1d7d8a98f0228e0c87c0d85d8",
"text": "This paper proposes a novel method for multivehicle detection and tracking using a vehicle-mounted monocular camera. In the proposed method, the features of vehicles are learned as a deformable object model through the combination of a latent support vector machine (LSVM) and histograms of oriented gradients (HOGs). The detection algorithm combines both global and local features of the vehicle as a deformable object model. Detected vehicles are tracked through a particle filter, which estimates the particles' likelihood by using a detection scores map and template compatibility for both root and parts of the vehicle while considering the deformation cost caused by the movement of vehicle parts. Tracking likelihoods are iteratively used as a priori probability to generate vehicle hypothesis regions and update the detection threshold to reduce false negatives of the algorithm presented before. Extensive experiments in urban scenarios showed that the proposed method can achieve an average vehicle detection rate of 97% and an average vehicle-tracking rate of 86% with a false positive rate of less than 0.26%.",
"title": ""
}
] |
scidocsrr
|
e6674d238c80fcc220fe308f8f14ff82
|
ArtGAN: Artwork synthesis with conditional categorical GANs
|
[
{
"docid": "e2009f56982f709671dcfe43048a8919",
"text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.",
"title": ""
},
{
"docid": "8b498cfaa07f0b2858e417e0e0d5adb4",
"text": "In classic pattern recognition problems, classes are mutually exclusive by de\"nition. Classi\"cation errors occur when the classes overlap in the feature space. We examine a di5erent situation, occurring when the classes are, by de\"nition, not mutually exclusive. Such problems arise in semantic scene and document classi\"cation and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classi\"cation, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a \"eld scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a di5erent treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classi\"cation; furthermore, our work appears to generalize to other classi\"cation problems of the same nature. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "35293c16985878fca24b5a327fd52c72",
"text": "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method – which we dub categorical generative adversarial networks (or CatGAN) – on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"title": ""
}
] |
[
{
"docid": "dde53711d1322976cffcfb9a82561a4c",
"text": "Recently, attempts have been made to collect millions of videos to train CNN models for action recognition in videos. However, curating such large-scale video datasets requires immense human labor, and training CNNs on millions of videos demands huge computational resources. In contrast, collecting action images from the Web is much easier and training on images requires much less computation. In addition, labeled web images tend to contain discriminative action poses, which highlight discriminative portions of a video’s temporal progression. We explore the question of whether we can utilize web action images to train better CNN models for action recognition in videos. We collect 23.8K manually filtered images from the Web that depict the 101 actions in the UCF101 action video dataset. We show that by utilizing web action images along with videos in training, significant performance boosts of CNN models can be achieved. We then investigate the scalability of the process by leveraging crawled web images (unfiltered) for UCF101 and ActivityNet. We replace 16.2M video frames by 393K unfiltered images and get comparable performance.",
"title": ""
},
{
"docid": "6850b52405e8056710f4b3010858cfbe",
"text": "spread of misinformation, rumors and hoaxes. The goal of this work is to introduce a simple modeling framework to study the diffusion of hoaxes and in particular how the availability of debunking information may contain their diffusion. As traditionally done in the mathematical modeling of information diffusion processes, we regard hoaxes as viruses: users can become infected if they are exposed to them, and turn into spreaders as a consequence. Upon verification, users can also turn into non-believers and spread the same attitude with a mechanism analogous to that of the hoax-spreaders. Both believers and non-believers, as time passes, can return to a susceptible state. Our model is characterized by four parameters: spreading rate, gullibility, probability to verify a hoax, and that to forget one's current belief. Simulations on homogeneous, heterogeneous, and real networks for a wide range of parameters values reveal a threshold for the fact-checking probability that guarantees the complete removal of the hoax from the network. Via a mean field approximation, we establish that the threshold value does not depend on the spreading rate but only on the gullibility and forgetting probability. Our approach allows to quantitatively gauge the minimal reaction necessary to eradicate a hoax.",
"title": ""
},
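A minimal agent-based rendering of the believer / fact-checker / susceptible dynamics described above can be simulated directly. The update rules and parameter values below are one plausible reading of the four-parameter model (spreading rate, gullibility, verification and forgetting probabilities), not the paper's exact equations.

```python
import random
import networkx as nx

S, BELIEVER, FACTCHECKER = 0, 1, 2

def simulate(G, beta=0.3, alpha=0.6, p_verify=0.1, p_forget=0.05, steps=200, seed=1):
    """Hoax vs. fact-check diffusion on a graph G (assumed update rules).

    beta: spreading rate; alpha: gullibility (prob. an exposed node believes rather
    than debunks); p_verify: believer -> fact-checker; p_forget: return to susceptible.
    """
    random.seed(seed)
    state = {v: S for v in G}
    state[random.choice(list(G))] = BELIEVER          # seed the hoax at one node
    for _ in range(steps):
        new = dict(state)
        for v in G:
            if state[v] == S:
                # Exposure through any active (believer or fact-checker) neighbour
                if any(state[u] != S and random.random() < beta for u in G[v]):
                    new[v] = BELIEVER if random.random() < alpha else FACTCHECKER
            elif state[v] == BELIEVER:
                if random.random() < p_verify:
                    new[v] = FACTCHECKER
                elif random.random() < p_forget:
                    new[v] = S
            elif random.random() < p_forget:          # fact-checkers may also forget
                new[v] = S
        state = new
    believers = sum(1 for v in G if state[v] == BELIEVER)
    checkers = sum(1 for v in G if state[v] == FACTCHECKER)
    return believers, checkers

G = nx.barabasi_albert_graph(1000, 3, seed=1)
print(simulate(G))   # (remaining believers, fact-checkers) after 200 steps
```

Sweeping p_verify in such a simulation is one way to look for the eradication threshold the passage describes.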
{
"docid": "4a3dad9ae334366e8db63e34f1fc6a2a",
"text": "The Encyclopedia of Data Warehousing and Mining, Second Edition offers thorough exposure to the issues of importance in the rapidly changing field of data warehousing and mining. This essential reference source informs decision makers, problem solvers, and data mining specialists in business, academia, government, and other settings with over 300 entries on theories, methodologies, functionalities, and applications.",
"title": ""
},
{
"docid": "374e5a4ad900a6f31e4083bef5c08ca4",
"text": "Procedural modeling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modeling attractive for creating virtual environments increasingly used in movies, games, and simulations. We survey procedural methods that are useful to generate features of virtual worlds, including terrains, vegetation, rivers, roads, buildings, and entire cities. In this survey, we focus particularly on the degree of intuitive control and of interactivity offered by each procedural method, because these properties are instrumental for their typical users: designers and artists. We identify the most promising research results that have been recently achieved, but we also realize that there is far from widespread acceptance of procedural methods among non-technical, creative professionals. We conclude by discussing some of the most important challenges of procedural modeling.",
"title": ""
},
{
"docid": "eee51fc5cd3bee512b01193fa396e19a",
"text": "Croston’s method is a widely used to predict inventory demand when it is inter mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]",
"title": ""
},
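Croston's method itself is easy to state: demand sizes and inter-demand intervals are smoothed separately with simple exponential smoothing, and the forecast is their ratio. The sketch below follows that textbook formulation; the smoothing parameter and the initialization are illustrative choices, not the authors' code.

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand.

    demand: list of per-period demands (many entries are zero)
    returns: one-step-ahead forecast per period (made at the end of each period)
    """
    z = None      # smoothed non-zero demand size
    p = None      # smoothed inter-demand interval
    q = 1         # periods since the last non-zero demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:                 # initialise on the first non-zero demand
                z, p = d, q
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts

print(croston([0, 0, 3, 0, 0, 0, 2, 0, 4, 0])[-1])   # forecast after the last period
```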
{
"docid": "155c692223bf8698278023c04e07f135",
"text": "Structure-function studies with mammalian reoviruses have been limited by the lack of a reverse-genetic system for engineering mutations into the viral genome. To circumvent this limitation in a partial way for the major outer-capsid protein sigma3, we obtained in vitro assembly of large numbers of virion-like particles by binding baculovirus-expressed sigma3 protein to infectious subvirion particles (ISVPs) that lack sigma3. A level of sigma3 binding approaching 100% of that in native virions was routinely achieved. The sigma3 coat in these recoated ISVPs (rcISVPs) appeared very similar to that in virions by electron microscopy and three-dimensional image reconstruction. rcISVPs retained full infectivity in murine L cells, allowing their use to study sigma3 functions in virus entry. Upon infection, rcISVPs behaved identically to virions in showing an extended lag phase prior to exponential growth and in being inhibited from entering cells by either the weak base NH4Cl or the cysteine proteinase inhibitor E-64. rcISVPs also mimicked virions in being incapable of in vitro activation to mediate lysis of erythrocytes and transcription of the viral mRNAs. Last, rcISVPs behaved like virions in showing minor loss of infectivity at 52 degrees C. Since rcISVPs contain virion-like levels of sigma3 but contain outer-capsid protein mu1/mu1C mostly cleaved at the delta-phi junction as in ISVPs, the fact that rcISVPs behaved like virions (and not ISVPs) in all of the assays that we performed suggests that sigma3, and not the delta-phi cleavage of mu1/mu1C, determines the observed differences in behavior between virions and ISVPs. To demonstrate the applicability of rcISVPs for genetic studies of protein functions in reovirus entry (an approach that we call recoating genetics), we used chimeric sigma3 proteins to localize the primary determinants of a strain-dependent difference in sigma3 cleavage rate to a carboxy-terminal region of the ISVP-bound protein.",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
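The modified PVD scheme above is not fully specified in the abstract, so the sketch below shows only the classic pixel-value-differencing embed step for a single grayscale pixel pair (one standard range table, overflow handling omitted), as background for the modification being proposed; it is not the authors' algorithm.

```python
import math

# Classic Wu-Tsai style range table: (lower, upper) difference ranges
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bitstream, pos):
    """Embed bits into one grayscale pixel pair; returns the new pair and bit position."""
    d = abs(p2 - p1)
    lo, hi = next((a, b) for a, b in RANGES if a <= d <= b)
    t = int(math.log2(hi - lo + 1))              # capacity of this pair in bits
    bits = bitstream[pos:pos + t].ljust(t, "0")
    d_new = lo + int(bits, 2)                    # the new difference encodes the secret bits
    m = d_new - d
    # Split the change between the two pixels, preserving the sign of (p2 - p1);
    # pixel-range overflow checks are omitted in this sketch.
    if p2 >= p1:
        p1_new = p1 - math.floor(m / 2)
        p2_new = p2 + math.ceil(m / 2)
    else:
        p1_new = p1 + math.ceil(m / 2)
        p2_new = p2 - math.floor(m / 2)
    return p1_new, p2_new, pos + t

p1, p2, nxt = embed_pair(120, 123, "1011", 0)
print(p1, p2, abs(p2 - p1))   # the difference (in range 0-7, so 3 bits) now encodes '101'
```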
{
"docid": "46d20d0330aaaf22418c53c715d78631",
"text": "s, Cochrane Central Register of Controlled Trials and Database of Systemic Reviews, Database of Abstracts of Effects, ACP Journal Club, and OTseeker. Experts such as librarians have been used. However, there is no mention of efforts in relation to unpublished research, but abstracts of 950 articles weres of 950 articles were scanned, which appears to be a sufficient amount.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "ef068ddc1d7cd8dd26acf4fafc54254d",
"text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named “few-shot object detection”. The key challenge consists in generating trustworthy training samples as many as possible from the pool. Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC’07 and ILSVRC’13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.",
"title": ""
},
{
"docid": "e9e7a68578f23b85bee9ebfe1b923f87",
"text": "Low-density lipoprotein (LDL) is the most abundant and the most atherogenic class of cholesterol-carrying lipoproteins in human plasma. The level of plasma LDL is regulated by the LDL receptor, a cell surface glycoprotein that removes LDL from plasma by receptor-mediated endocytosis. Defects in the gene encoding the LDL receptor, which occur in patients with familial hypercholesterolemia, elevate the plasma LDL level and produce premature coronary atherosclerosis. The physiologically important LDL receptors are located primarily in the liver, where their number is regulated by the cholesterol content of the hepatocyte. When the cholesterol content of hepatocytes is raised by ingestion of diets high in saturated fat and cholesterol, LDL receptors fall and plasma LDL levels rise. Conversely, maneuvers that lower the cholesterol content of hepatocytes, such as ingestion of drugs that inhibit cholesterol synthesis (mevinolin or compactin) or prevent the reutilization of bile acids (cholestyramine or colestipol), stimulate LDL receptor production and lower plasma LDL levels. The normal process of receptor regulation can therefore be exploited in powerful and novel ways so as to reverse hypercholesterolemia and prevent atherosclerosis.",
"title": ""
},
{
"docid": "703f0baf67a1de0dfb03b3192327c4cf",
"text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.",
"title": ""
},
{
"docid": "9c8b3aa5fb075c1654c1e7eb71b350a3",
"text": "Monitoring data streams in a distributed system is the focus of much research in recent years. Most of the proposed schemes, however, deal with monitoring simple aggregated values, such as the frequency of appearance of items in the streams. More involved challenges, such as the important task of feature selection (e.g., by monitoring the information gain of various features), still require very high communication overhead using naive, centralized algorithms. We present a novel geometric approach by which an arbitrary global monitoring task can be split into a set of constraints applied locally on each of the streams. The constraints are used to locally filter out data increments that do not affect the monitoring outcome, thus avoiding unnecessary communication. As a result, our approach enables monitoring of arbitrary threshold functions over distributed data streams in an efficient manner. We present experimental results on real-world data which demonstrate that our algorithms are highly scalable, and considerably reduce communication load in comparison to centralized algorithms.",
"title": ""
},
{
"docid": "3cf174505ecd647930d762327fc7feb6",
"text": "The purpose of the present study was to examine the relationship between workplace friendship and social loafing effect among employees in Certified Public Accounting (CPA) firms. Previous studies showed that workplace friendship has both positive and negative effects, meaning that there is an inconsistent relationship between workplace friendship and social loafing. The present study investigated the correlation between workplace friendship and social loafing effect among employees from CPA firms in Taiwan. The study results revealed that there was a negative relationship between workplace friendship and social loafing effect among CPA employees. In other words, the better the workplace friendship, the lower the social loafing effect. An individual would not put less effort in work when there was a low social loafing effect.",
"title": ""
},
{
"docid": "3953a1a05e064b8211fe006af4595e70",
"text": "Sentiment analysis is a common task in natural language processing that aims to detect polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve a new state of the art. We show how to use for this task the standard generative language models, which are slightly complementary to the state of the art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we publish also the code needed to repeat the experiments. This should simplify further advance of the state of the art, as other researchers can combine their techniques with ours with little effort.",
"title": ""
},
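As an illustration of the kind of simple classifiers such combinations start from, the snippet below trains two bag-of-n-grams polarity models and averages their predicted probabilities; it is a generic ensemble sketch with toy data, not the authors' exact combination (which also uses generative language models).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train_ensemble(train_texts, train_labels):
    """Two simple polarity classifiers whose predicted probabilities are averaged."""
    lr = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
    nb = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       MultinomialNB())
    lr.fit(train_texts, train_labels)
    nb.fit(train_texts, train_labels)
    return lr, nb

def predict(models, texts):
    lr, nb = models
    probs = (lr.predict_proba(texts) + nb.predict_proba(texts)) / 2.0
    return np.argmax(probs, axis=1)          # index into sorted classes (0 = negative, 1 = positive)

models = train_ensemble(["great movie", "awful plot", "loved it", "terrible acting"],
                        [1, 0, 1, 0])
print(predict(models, ["what a great film", "truly awful"]))
```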
{
"docid": "0f48d860b9ab4527293ae53b3c3092fe",
"text": "6 Relationships between common water bacteria and pathogens in drinking-water H. Leclerc 6.1 INTRODUCTION To perform a risk analysis for pathogens in drinking-water, it is necessary, on the one hand, to promote epidemiological studies, such as prospective cohort and case–control studies. It is also appropriate, on the other hand, to better understand the ecology of these microorganisms, especially in analysing in detail the interactions between common water bacteria and pathogens in such diverse habitats as free water and biofilms. It appears essential to distinguish two categories of drinking-water sources: surface water and groundwater under the direct influence of surface water",
"title": ""
},
{
"docid": "42cfbb2b2864e57d59a72ec91f4361ff",
"text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4% (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.",
"title": ""
},
{
"docid": "d9df79b3724820a12d5517f2dcdc33ff",
"text": "The tremendous growth of machine-tomachine (M2M) applications has been a great attractor to cellular network operators to provide machine-type communication services. One of the important challenges for cellular systems supporting M2M terminals is coverage, because terminals can be located in spaces in buildings and structures suffering from significant penetration losses. Since these terminals are also often stationary, they are permanently without cellular coverage. To address this critical issue, the third generation partnership project (3GPP), and in particular its radio access network technical specification group, commenced work on coverage enhancement (CE) for long-term evolution (LTE) systems in June 2013. This article reviews the CE objectives defined for LTE machine-type communication and presents CE methods for LTE downlink and uplink channels discussed in this group. The presented methods achieve CE in a spectrally efficient manner and without notably affecting performance for legacy (non- M2M) devices.",
"title": ""
},
{
"docid": "b64664ba7102e070d276cce2e06dc8ab",
"text": "BACKGROUND\nNew technologies have recently been used for monitoring signs and symptoms of mental health illnesses and particularly have been tested to improve the outcomes in bipolar disorders. Web-based psychoeducational programs for bipolar disorders have also been implemented, yet to our knowledge, none of them have integrated both approaches in one single intervention. The aim of this project is to develop and validate a smartphone application to monitor symptoms and signs and empower the self-management of bipolar disorder, offering customized embedded psychoeducation contents, in order to identify early symptoms and prevent relapses and hospitalizations.\n\n\nMETHODS/DESIGN\nThe project will be carried out in three complementary phases, which will include a feasibility study (first phase), a qualitative study (second phase) and a randomized controlled trial (third phase) comparing the smartphone application (SIMPLe) on top of treatment as usual with treatment as usual alone. During the first phase, feasibility and satisfaction will be assessed with the application usage log data and with an electronic survey. Focus groups will be conducted and technical improvements will be incorporated at the second phase. Finally, at the third phase, survival analysis with multivariate data analysis will be performed and relationships between socio-demographic, clinical variables and assessments scores with relapses in each group will be explored.\n\n\nDISCUSSION\nThis project could result in a highly available, user-friendly and not costly monitoring and psychoeducational intervention that could improve the outcome of people suffering from bipolar disorders in a practical and secure way.\n\n\nTRIAL REGISTRATION\nClinical Trials.gov: NCT02258711 (October 2014).",
"title": ""
}
] |
scidocsrr
|
0ddad1e88882cc7b10135b89db8b3d78
|
Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering
|
[
{
"docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f",
"text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.",
"title": ""
}
] |
[
{
"docid": "704598402da135b6b7e3251de4c6edf8",
"text": "Almost every complex software system today is configurable. While configurability has many benefits, it challenges performance prediction, optimization, and debugging. Often, the influences of individual configuration options on performance are unknown. Worse, configuration options may interact, giving rise to a configuration space of possibly exponential size. Addressing this challenge, we propose an approach that derives a performance-influence model for a given configurable system, describing all relevant influences of configuration options and their interactions. Our approach combines machine-learning and sampling heuristics in a novel way. It improves over standard techniques in that it (1) represents influences of options and their interactions explicitly (which eases debugging), (2) smoothly integrates binary and numeric configuration options for the first time, (3) incorporates domain knowledge, if available (which eases learning and increases accuracy), (4) considers complex constraints among options, and (5) systematically reduces the solution space to a tractable size. A series of experiments demonstrates the feasibility of our approach in terms of the accuracy of the models learned as well as the accuracy of the performance predictions one can make with them.",
"title": ""
},
{
"docid": "eb42c7dafed682a0643b46f49d2a86ec",
"text": "OBJECTIVE\nTo evaluate the effectiveness of telephone based peer support in the prevention of postnatal depression.\n\n\nDESIGN\nMultisite randomised controlled trial.\n\n\nSETTING\nSeven health regions across Ontario, Canada.\n\n\nPARTICIPANTS\n701 women in the first two weeks postpartum identified as high risk for postnatal depression with the Edinburgh postnatal depression scale and randomised with an internet based randomisation service.\n\n\nINTERVENTION\nProactive individualised telephone based peer (mother to mother) support, initiated within 48-72 hours of randomisation, provided by a volunteer recruited from the community who had previously experienced and recovered from self reported postnatal depression and attended a four hour training session.\n\n\nMAIN OUTCOME MEASURES\nEdinburgh postnatal depression scale, structured clinical interview-depression, state-trait anxiety inventory, UCLA loneliness scale, and use of health services.\n\n\nRESULTS\nAfter web based screening of 21 470 women, 701 (72%) eligible mothers were recruited. A blinded research nurse followed up more than 85% by telephone, including 613 at 12 weeks and 600 at 24 weeks postpartum. At 12 weeks, 14% (40/297) of women in the intervention group and 25% (78/315) in the control group had an Edinburgh postnatal depression scale score >12 (chi(2)=12.5, P<0.001; number need to treat 8.8, 95% confidence interval 5.9 to 19.6; relative risk reduction 0.46, 95% confidence interval 0.24 to 0.62). There was a positive trend in favour of the intervention group for maternal anxiety but not loneliness or use of health services. For ethical reasons, participants identified with clinical depression at 12 weeks were referred for treatment, resulting in no differences between groups at 24 weeks. Of the 221 women in the intervention group who received and evaluated their experience of peer support, over 80% were satisfied and would recommend this support to a friend.\n\n\nCONCLUSION\nTelephone based peer support can be effective in preventing postnatal depression among women at high risk.\n\n\nTRIAL REGISTRATION\nISRCTN 68337727.",
"title": ""
},
{
"docid": "d08529ef66abefda062a414acb278641",
"text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.",
"title": ""
},
{
"docid": "874876e2ed9e4a2ba044cf62d408da55",
"text": "It is widely believed that refactoring improves software quality and programmer productivity by making it easier to maintain and understand software systems. However, the role of refactorings has not been systematically investigated using fine-grained evolution history. We quantitatively and qualitatively studied API-level refactorings and bug fixes in three large open source projects, totaling 26523 revisions of evolution.\n The study found several surprising results: One, there is an increase in the number of bug fixes after API-level refactorings. Two, the time taken to fix bugs is shorter after API-level refactorings than before. Three, a large number of refactoring revisions include bug fixes at the same time or are related to later bug fix revisions. Four, API-level refactorings occur more frequently before than after major software releases. These results call for re-thinking refactoring's true benefits. Furthermore, frequent floss refactoring mistakes observed in this study call for new software engineering tools to support safe application of refactoring and behavior modifying edits together.",
"title": ""
},
{
"docid": "1a58f72cd0f6e979a72dbc233e8c4d4a",
"text": "The revolution of genome sequencing is continuing after the successful second-generation sequencing (SGS) technology. The third-generation sequencing (TGS) technology, led by Pacific Biosciences (PacBio), is progressing rapidly, moving from a technology once only capable of providing data for small genome analysis, or for performing targeted screening, to one that promises high quality de novo assembly and structural variation detection for human-sized genomes. In 2014, the MinION, the first commercial sequencer using nanopore technology, was released by Oxford Nanopore Technologies (ONT). MinION identifies DNA bases by measuring the changes in electrical conductivity generated as DNA strands pass through a biological pore. Its portability, affordability, and speed in data production makes it suitable for real-time applications, the release of the long read sequencer MinION has thus generated much excitement and interest in the genomics community. While de novo genome assemblies can be cheaply produced from SGS data, assembly continuity is often relatively poor, due to the limited ability of short reads to handle long repeats. Assembly quality can be greatly improved by using TGS long reads, since repetitive regions can be easily expanded into using longer sequencing lengths, despite having higher error rates at the base level. The potential of nanopore sequencing has been demonstrated by various studies in genome surveillance at locations where rapid and reliable sequencing is needed, but where resources are limited.",
"title": ""
},
{
"docid": "44e3ca0f64566978c3e0d0baeaa93543",
"text": "Many applications of fast Fourier transforms (FFT’s), such as computer tomography, geophysical signal processing, high-resolution imaging radars, and prediction filters, require high-precision output. An error analysis reveals that the usual method of fixed-point computation of FFT’s of vectors of length2 leads to an average loss of/2 bits of precision. This phenomenon, often referred to as computational noise, causes major problems for arithmetic units with limited precision which are often used for real-time applications. Several researchers have noted that calculation of FFT’s with algebraic integers avoids computational noise entirely, see, e.g., [1]. We will combine a new algorithm for approximating complex numbers by cyclotomic integers with Chinese remaindering strategies to give an efficient algorithm to compute -bit precision FFT’s of length . More precisely, we will approximate complex numbers by cyclotomic integers in [ 2 2 ] whose coefficients, when expressed as polynomials in 2 2 , are bounded in absolute value by some integer . For fixed our algorithm runs in time (log( )), and produces an approximation with worst case error of (1 2 ). We will prove that this algorithm has optimal worst case error by proving a corresponding lower bound on the worst case error of any approximation algorithm for this task. The main tool for designing the algorithms is the use of the cyclotomic units, a subgroup of finite index in the unit group of the cyclotomic field. First implementations of our algorithms indicate that they are fast enough to be used for the design of low-cost high-speed/highprecision FFT chips.",
"title": ""
},
{
"docid": "a2617ce3b0d618a5e4b61033345d59b7",
"text": "Asymmetry of the eyelid crease is a major complication following double eyelid blepharoplasty; the reasons are multivariate. This study presents, for the first time, a novel method, based on high-definition magnetic resonance imaging and high-precision weighing of tissue, for quantitating preoperative asymmetry of eyelid thickness in young Chinese women presenting for blepharoplasty. From 1 January 2008 to 1 October 2011, we studied 1217 women requesting double eyelid blepharoplasty. The patients ranged in age from 17 to 24 years (average 21.13 years). All patients were of Chinese Han nationality. Soft-tissue thickness at the tarsal plate superior border was 5.05 ± 1.01 units on the right side and 4.12 ± 0.96 units on the left. The submuscular fibro-adipose tissue area was 95.12 ± 23.27 unit(2) on the right side and 76.05 ± 21.11 unit(2) on the left. The pre-aponeurotic fat pad area was 112.33 ± 29.16 unit(2) on the right side and 91.25 ± 27.32 unit(2) on the left. The orbicularis muscle resected weighed 0.185 ± 0.055 g on the right side and 0.153 ± 0.042 g on the left; the orbital fat resected weighed 0.171 ± 0.062 g on the right side and 0.106 ± 0.057 g on the left. In conclusion, upper eyelid thickness asymmetry is a common phenomenon in young Chinese women who wish to undertake double eyelid blepharoplasty. We have demonstrated that the orbicularis muscle and orbital fat pad are consistently thicker on the right side than on the left.",
"title": ""
},
{
"docid": "1bbd0eca854737c94e62442ee4cedac8",
"text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.",
"title": ""
},
{
"docid": "ecfd9b38cc68c4af9addb4915424d6d0",
"text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.",
"title": ""
},
{
"docid": "49e91d22adb0cdeb014b8330e31f226d",
"text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.",
"title": ""
},
{
"docid": "8b581e9ae50ed1f1aa1077f741fa4504",
"text": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.",
"title": ""
},
{
"docid": "6a80eb8001380f4d63a8cf3f3693f73c",
"text": "Traditional energy measurement fails to provide support to consumers to make intelligent decisions to save energy. Non-intrusive load monitoring is one solution that provides disaggregated power consumption profiles. Machine learning approaches rely on public datasets to train parameters for their algorithms, most of which only provide low-frequency appliance-level measurements, thus limiting the available feature space for recognition.\n In this paper, we propose a low-cost measurement system for high-frequency energy data. Our work utilizes an off-the-shelf power strip with a voltage-sensing circuit, current sensors, and a single-board PC as data aggregator. We develop a new architecture and evaluate the system in real-world environments. The self-contained unit for six monitored outlets can achieve up to 50 kHz for all signals simultaneously. A simple design and off-the-shelf components allow us to keep costs low. Equipping a building with our measurement systems is more feasible compared to expensive existing solutions. We used the outlined system architecture to manufacture 20 measurement systems to collect energy data over several months of more than 50 appliances at different locations, with an aggregated size of 15 TB.",
"title": ""
},
{
"docid": "c998c8d5cc17ba668492d813d522a17d",
"text": "This paper presents a 3D face reconstruction method based on multi-view stereo algorithm, the proposed algorithm reconstructs 3D face model from videos captured around static human faces. Image sequence is processed as the input of shape from motion algorithm to estimate camera parameters and camera positions, 3D points with different denseness degree could be acquired by using a method named patch based multi-view stereopsis, finally, the proposed method uses a surface reconstruction algorithm to generate a watertight 3D face model. The proposed approach can automatically detect facial feature points; it does not need any initialization and special equipments; videos can be obtained with commonly used picture pick-up device such as mobile phones. Several groups of experiments have been conducted to validate the availability of the proposed method.",
"title": ""
},
{
"docid": "70b62dfeab05d3bc4c64199a5cea3b1a",
"text": "Sleep timing undergoes profound changes during adolescence, often resulting in inadequate sleep duration. The present study examines the relationship of sleep duration with positive attitude toward life and academic achievement in a sample of 2716 adolescents in Switzerland (mean age: 15.4 years, SD = 0.8), and whether this relationship is mediated by increased daytime tiredness and lower self-discipline/behavioral persistence. Further, we address the question whether adolescents who start school modestly later (20 min; n = 343) receive more sleep and report better functioning. Sleeping less than an average of 8 h per night was related to more tiredness, inferior behavioral persistence, less positive attitude toward life, and lower school grades, as compared to longer sleep duration. Daytime tiredness and behavioral persistence mediated the relationship between short sleep duration and positive attitude toward life and school grades. Students who started school 20 min later received reliably more sleep and reported less tiredness.",
"title": ""
},
{
"docid": "2fbd1b2e25473affb40990195b26a88b",
"text": "In this paper we considerably improve on a state-of-the-art alpha matting approach by incorporating a new prior which is based on the image formation process. In particular, we model the prior probability of an alpha matte as the convolution of a high-resolution binary segmentation with the spatially varying point spread function (PSF) of the camera. Our main contribution is a new and efficient de-convolution approach that recovers the prior model, given an approximate alpha matte. By assuming that the PSF is a kernel with a single peak, we are able to recover the binary segmentation with an MRF-based approach, which exploits flux and a new way of enforcing connectivity. The spatially varying PSF is obtained via a partitioning of the image into regions of similar defocus. Incorporating our new prior model into a state-of-the-art matting technique produces results that outperform all competitors, which we confirm using a publicly available benchmark.",
"title": ""
},
{
"docid": "e7790dcba1b3982f8cf46ae7dc78fc11",
"text": "This paper introduces a new approach for expansion of queries with geographical context. The proposed strategy is based on a query parser that captures geonames and spatial relationships, and maps geographical features and feature types into concepts of a geographical ontology. Different strategies for query expansion, according to the geographical restrictions given by the user, are compared. The proposed method allows a more versatile and focused expansion towards the geographical information need of the user.",
"title": ""
},
{
"docid": "e2649203ae3e8648c8ec1eafb7a19d6e",
"text": "This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes. The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.",
"title": ""
},
{
"docid": "cfe1b91f879ab59b3afcfe2bf64c911e",
"text": "We consider a variant of the classical three-peg Tower of Hanoi problem, where limitations on the possible moves among the pegs are imposed. Each variant corresponds to a di-graph whose vertices are the pegs, and an edge from one vertex to another designates the ability of moving a disk from the first peg to the other, provided that the rules concerning the disk sizes are obeyed. There are five non-isomorphic graphs on three vertices, which are strongly connected—a sufficient condition for the existence of a solution to the problem. We provide optimal algorithms for the problem for all these graphs, and find the number of moves each requires.",
"title": ""
},
{
"docid": "86f1eb528e5d062a4a8d7c2d03ae4016",
"text": "Recent advances in representation learning on graphs, mainly leveraging graph convolutional networks, have brought a substantial improvement on many graphbased benchmark tasks. While novel approaches to learning node embeddings are highly suitable for node classification and link prediction, their application to graph classification (predicting a single label for the entire graph) remains mostly rudimentary, typically using a single global pooling step to aggregate node features or a hand-designed, fixed heuristic for hierarchical coarsening of the graph structure. An important step towards ameliorating this is differentiable graph coarsening—the ability to reduce the size of the graph in an adaptive, datadependent manner within a graph neural network pipeline, analogous to image downsampling within CNNs. However, the previous prominent approach to pooling has quadratic memory requirements during training and is therefore not scalable to large graphs. Here we combine several recent advances in graph neural network design to demonstrate that competitive hierarchical graph classification results are possible without sacrificing sparsity. Our results are verified on several established graph classification benchmarks, and highlight an important direction for future research in graph-based neural networks.",
"title": ""
},
{
"docid": "ed9d72566cdf3e353bf4b1e589bf85eb",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
}
] |
scidocsrr
|
ccba46f6feea5bbb3fb3fc700b51ebd0
|
Credit Scoring Models Using Soft Computing Methods: A Survey
|
[
{
"docid": "5b9baa6587bc70c17da2b0512545c268",
"text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improving the accuracy of the credit scoring mode. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples will be employed here to compare the error rate to other credit scoring models including the ANN, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than other models. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "537076966f77631a3e915eccc8223d2b",
"text": "Finding domain invariant features is critical for successful domain adaptation and transfer learning. However, in the case of unsupervised adaptation, there is a significant risk of overfitting on source training data. Recently, a regularization for domain adaptation was proposed for deep models by (Ganin and Lempitsky, 2015). We build on their work by suggesting a more appropriate regularization for denoising autoencoders. Our model remains unsupervised and can be computed in a closed form. On standard text classification adaptation tasks, our approach yields the state of the art results, with an important reduction of the learning cost.",
"title": ""
},
{
"docid": "7f2857c1bd23c7114d58c290f21bf7bd",
"text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job",
"title": ""
},
{
"docid": "4c6efebdf08a3c1c4cefc9cdd8950bab",
"text": "Four patients are presented with the Goldenhar syndrome (GS) and cranial defects consisting of plagiocephaly, microcephaly, skull defects, or intracranial dermoid cysts. Twelve cases from the literature add hydrocephalus, encephalocele, and arhinencephaly to a growing list of brain anomalies in GS. As a group, these patients emphasize the variability of GS and the increased risk for developmental retardation with multiple, severe, or unusual manifestations. The temporal relation of proposed teratogenic events in GS provides an opportunity to reconstruct biological relationships within the 3-5-week human embryo.",
"title": ""
},
{
"docid": "8ccbf0f95df6d4d3c8eba33befc0f6b7",
"text": "Tactile graphics play an essential role in knowledge transfer for blind people. The tactile exploration of these graphics is often challenging because of the cognitive load caused by physiological constraints and their complexity. The coupling of physical tactile graphics with electronic devices offers to support the tactile exploration by auditory feedback. Often, these systems have strict constraints regarding their mobility or the process of coupling both components. Additionally, visually impaired people cannot appropriately benefit from their residual vision. This article presents a concept for 3D printed tactile graphics, which offers to use audio-tactile graphics with usual smartphones or tablet-computers. By using capacitive markers, the coupling of the tactile graphics with the mobile device is simplified. These tactile graphics integrating these markers can be printed in one turn by off-the-shelf 3D printers without any post-processing and allows us to use multiple elevation levels for graphical elements. Based on the developed generic concept on visually augmented audio-tactile graphics, we presented a case study for maps. A prototypical implementation was tested by a user study with visually impaired people. All the participants were able to interact with the 3D printed tactile maps using a standard tablet computer. To study the effect of visual augmentation of graphical elements, we conducted another comprehensive user study. We tested multiple types of graphics and obtained evidence that visual augmentation may offer clear advantages for the exploration of tactile graphics. Even participants with a minor residual vision could solve the tasks with visual augmentation more quickly and accurately.",
"title": ""
},
{
"docid": "296e9204869a3a453dd304fc3b4b8c4b",
"text": "Today, travelers are provided large amount information which includes Web sites and tourist magazines about introduction of tourist spot. However, it is not easy for users to process the information in a short time. Therefore travelers prefer to receive pertinent information easier and have that information presented in a clear and concise manner. This paper proposes a personalization method for tourist Point of Interest (POI) Recommendation.",
"title": ""
},
{
"docid": "13e84c1160fbffd1d8f91d5274c4d8cc",
"text": "This paper presents and demonstrates a class of 3-D integration platforms of substrate-integrated waveguide (SIW). The proposed right angle E-plane corner based on SIW technology enables the implementation of various 3-D architectures of planar circuits with the printed circuit board and other similar processes. This design scheme brings up attractive advantages in terms of cost, flexibility, and integration. Two circuit prototypes with both 0- and 45° vertical rotated arms are demonstrated. The straight version of the prototypes shows 0.5 dB of insertion loss from 30 to 40 GHz, while the rotated version gives 0.7 dB over the same frequency range. With this H-to-E-plane interconnect, a T-junction is studied and designed. Simulated results show 20-dB return loss over 19.25% of bandwidth. Measured results suggest an excellent performance within the experimental frequency range of 32-37.4 GHz, with 10-dB return loss and less than ±4° phase imbalance. An optimized wideband magic-T structure is demonstrated and fabricated. Both simulated and measured results show a very promising performance with very good isolation and power equality. With two 45° vertical rotated arm bends, two antennas are used to build up a dual polarization system. An isolation of 20 dB is shown over 32-40 GHz and the radiation patterns of the antenna are also given.",
"title": ""
},
{
"docid": "309e14c07a3a340f7da15abeb527231d",
"text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.",
"title": ""
},
{
"docid": "7f4701d8c9f651c3a551a91d19fd28d9",
"text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.",
"title": ""
},
{
"docid": "66b680500240631b9a4b682b33a5bafa",
"text": "Multichannel customer management is “the design, deployment, and evaluation of channels to enhance customer value through effective customer acquisition, retention, and development” (Neslin, Scott A., D. Grewal, R. Leghorn, V. Shankar, M. L. Teerling, J. S. Thomas, P. C. Verhoef (2006), Challenges and Opportunities in Multichannel Management. Journal of Service Research 9(2) 95–113). Channels typically include the store, the Web, catalog, sales force, third party agency, call center and the like. In recent years, multichannel marketing has grown tremendously and is anticipated to grow even further. While we have developed a good understanding of certain issues such as the relative value of a multichannel customer over a single channel customer, several research and managerial questions still remain. We offer an overview of these emerging issues, present our future outlook, and suggest important avenues for future research. © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a4099a526548c6d00a91ea21b9f2291d",
"text": "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and/or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information.",
"title": ""
},
{
"docid": "40c5f333d037f1e9a26e186d823b336e",
"text": "We present a simple, prepackaged solution to generating paraphrases of English sentences. We use the Paraphrase Database (PPDB) for monolingual sentence rewriting and provide machine translation language packs: prepackaged, tuned models that can be downloaded and used to generate paraphrases on a standard Unix environment. The language packs can be treated as a black box or customized to specific tasks. In this demonstration, we will explain how to use the included interactive webbased tool to generate sentential paraphrases.",
"title": ""
},
{
"docid": "c2b1bb55522213987573b22fa407c937",
"text": "We present a system for producing 3D animations using physical objects (i.e., puppets) as input. Puppeteers can load 3D models of familiar rigid objects, including toys, into our system and use them as puppets for an animation. During a performance, the puppeteer physically manipulates these puppets in front of a Kinect depth sensor. Our system uses a combination of image-feature matching and 3D shape matching to identify and track the physical puppets. It then renders the corresponding 3D models into a virtual set. Our system operates in real time so that the puppeteer can immediately see the resulting animation and make adjustments on the fly. It also provides 6D virtual camera \\\\rev{and lighting} controls, which the puppeteer can adjust before, during, or after a performance. Finally our system supports layered animations to help puppeteers produce animations in which several characters move at the same time. We demonstrate the accessibility of our system with a variety of animations created by puppeteers with no prior animation experience.",
"title": ""
},
{
"docid": "0a4392285df7ddb92458ffa390f36867",
"text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"title": ""
},
{
"docid": "f465475eb7bb52d455e3ed77b4808d26",
"text": "Background Long-term dieting has been reported to reduce resting energy expenditure (REE) leading to weight regain once the diet has been curtailed. Diets are also difficult to follow for a significant length of time. The purpose of this preliminary proof of concept study was to examine the effects of short-term intermittent dieting during exercise training on REE and weight loss in overweight women.",
"title": ""
},
{
"docid": "3c13399d0c869e58830a7efb8f6832a8",
"text": "The use of supply frequencies above 50-60 Hz allows for an increase in the power density applied to the ozonizer electrode surface and an increase in ozone production for a given surface area, while decreasing the necessary peak voltage. Parallel-resonant converters are well suited for supplying the high capacitive load of ozonizers. Therefore, in this paper the current-fed parallel-resonant push-pull inverter is proposed as a good option to implement high-voltage high-frequency power supplies for ozone generators. The proposed converter is analyzed and some important characteristics are obtained. The design and implementation of the complete power supply are also shown. The UC3872 integrated circuit is proposed in order to operate the converter at resonance, allowing us to maintain a good response disregarding the changes in electric parameters of the transformer-ozonizer pair. Experimental results for a 50-W prototype are also provided.",
"title": ""
},
{
"docid": "b76d5cfc22d0c39649ca093111864926",
"text": "Runtime verification is the process of observing a sequence of events generated by a running system and comparing it to some formal specification for potential violations. We show how the use of a runtime monitor can greatly speed up the testing phase of a video game under development by automating the detection of bugs when the game is being played. We take advantage of the fact that a video game, contrarily to generic software, follows a special structure that contains a “game loop.” This game loop can be used to centralize the instrumentation and generate events based on the game's internal state. We report on experiments made on a sample of six real-world video games of various genres and sizes by successfully instrumenting and efficiently monitoring various temporal properties over their execution, including actual bugs reported in the games' bug tracking database in the course of their development.",
"title": ""
},
{
"docid": "d34d8dd7ba59741bb5e28bba3e870ac4",
"text": "Among those who have recently lost a job, social networks in general and online ones in particular may be useful to cope with stress and find new employment. This study focuses on the psychological and practical consequences of Facebook use following job loss. By pairing longitudinal surveys of Facebook users with logs of their online behavior, we examine how communication with different kinds of ties predicts improvements in stress, social support, bridging social capital, and whether they find new jobs. Losing a job is associated with increases in stress, while talking with strong ties is generally associated with improvements in stress and social support. Weak ties do not provide these benefits. Bridging social capital comes from both strong and weak ties. Surprisingly, individuals who have lost a job feel greater stress after talking with strong ties. Contrary to the \"strength of weak ties\" hypothesis, communication with strong ties is more predictive of finding employment within three months.",
"title": ""
},
{
"docid": "337a738d386fa66725fe9be620365d5f",
"text": "Change in a software is crucial to incorporate defect correction and continuous evolution of requirements and technology. Thus, development of quality models to predict the change proneness attribute of a software is important to effectively utilize and plan the finite resources during maintenance and testing phase of a software. In the current scenario, a variety of techniques like the statistical techniques, the Machine Learning (ML) techniques and the Search-based techniques (SBT) are available to develop models to predict software quality attributes. In this work, we assess the performance of ten machine learning and search-based techniques using data collected from three open source software. We first develop a change prediction model using one data set and then we perform inter-project validation using two other data sets in order to obtain unbiased and generalized results. The results of the study indicate comparable performance of SBT with other employed statistical and ML techniques. This study also supports inter project validation as we successfully applied the model created using the training data of one project on other similar projects and yield good results.",
"title": ""
},
{
"docid": "c6a649a1eed332be8fc39bfa238f4214",
"text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.",
"title": ""
},
{
"docid": "9975e61afd0bf521c3ffbf29d0f39533",
"text": "Computer security depends largely on passwords to authenticate human users. However, users have difficulty remembering passwords over time if they choose a secure password, i.e. a password that is long and random. Therefore, they tend to choose short and insecure passwords. Graphical passwords, which consist of clicking on images rather than typing alphanumeric strings, may help to overcome the problem of creating secure and memorable passwords. In this paper we describe PassPoints, a new and more secure graphical password system. We report an empirical study comparing the use of PassPoints to alphanumeric passwords. Participants created and practiced either an alphanumeric or graphical password. The participants subsequently carried out three longitudinal trials to input their password over the course of 6 weeks. The results show that the graphical password users created a valid password with fewer difficulties than the alphanumeric users. However, the graphical users took longer and made more invalid password inputs than the alphanumeric users while practicing their passwords. In the longitudinal trials the two groups performed similarly on memory of their password, but the graphical group took more time to input a password. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
3c813c21dbb065c9da5562d21be5b73b
|
Toxic Behaviors in Esports Games: Player Perceptions and Coping Strategies
|
[
{
"docid": "ac46286c7d635ccdcd41358666026c12",
"text": "This paper represents our first endeavor to explore how to better understand the complex nature, scope, and practices of eSports. Our goal is to explore diverse perspectives on what defines eSports as a starting point for further research. Specifically, we critically reviewed existing definitions/understandings of eSports in different disciplines. We then interviewed 26 eSports players and qualitatively analyzed their own perceptions of eSports. We contribute to further exploring definitions and theories of eSports for CHI researchers who have considered online gaming a serious and important area of research, and highlight opportunities for new avenues of inquiry for researchers who are interested in designing technologies for this unique genre.",
"title": ""
},
{
"docid": "3d7fabdd5f56c683de20640abccafc44",
"text": "The capacity to exercise control over the nature and quality of one's life is the essence of humanness. Human agency is characterized by a number of core features that operate through phenomenal and functional consciousness. These include the temporal extension of agency through intentionality and forethought, self-regulation by self-reactive influence, and self-reflectiveness about one's capabilities, quality of functioning, and the meaning and purpose of one's life pursuits. Personal agency operates within a broad network of sociostructural influences. In these agentic transactions, people are producers as well as products of social systems. Social cognitive theory distinguishes among three modes of agency: direct personal agency, proxy agency that relies on others to act on one's behest to secure desired outcomes, and collective agency exercised through socially coordinative and interdependent effort. Growing transnational embeddedness and interdependence are placing a premium on collective efficacy to exercise control over personal destinies and national life.",
"title": ""
}
] |
[
{
"docid": "244745da710e8c401173fe39359c7c49",
"text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.",
"title": ""
},
{
"docid": "9f5b61ad41dceff67ab328791ed64630",
"text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.",
"title": ""
},
{
"docid": "6779d20fd95ff4525404bdd4d3c7df4b",
"text": "A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1dc7b9dc4f135625e2680dcde8c9e506",
"text": "This paper empirically analyzes di erent e ects of advertising in a nondurable, experience good market. A dynamic learning model of consumer behavior is presented in which we allow both \\informative\" e ects of advertising and \\prestige\" or \\image\" e ects of advertising. This learning model is estimated using consumer level panel data tracking grocery purchases and advertising exposures over time. Empirical results suggest that in this data, advertising's primary e ect was that of informing consumers. The estimates are used to quantify the value of this information to consumers and evaluate welfare implications of an alternative advertising regulatory regime. JEL Classi cations: D12, M37, D83 ' Economics Dept., Boston University, Boston, MA 02115 ([email protected]). This paper is a revised version of the second and third chapters of my doctoral dissertation at Yale University. Many thanks to my advisors: Steve Berry and Ariel Pakes, as well as Lanier Benkard, Russell Cooper, Gautam Gowrisankaran, Sam Kortum, Mike Riordan, John Rust, Roni Shachar, and many seminar participants, including most recently those at the NBER 1997Winter IO meetings, for advice and comments. I thank the Yale School of Management for gratefully providing the data used in this study. Financial support from the Cowles Foundation in the form of the Arvid Anderson Dissertation Fellowship is acknowledged and appreciated. All remaining errors in this paper are my own.",
"title": ""
},
{
"docid": "f26680bb9306ca413d0fd36efa406107",
"text": "Frequency-domain concepts and terminology are commonly used to describe antennas. These are very satisfactory for a CW or narrowband application. However, their validity is questionable for an instantaneous wideband excitation. Time-domain and/or wideband analyses can provide more insight and more effective terminology. Two approaches for this time-domain analysis have been described. The more complete one uses the transfer function, a function which describes the amplitude and phase of the response over the entire frequency spectrum. While this is useful for evaluating the overall response of a system, it may not be practical when trying to characterize an antenna's performance, and trying to compare it with that of other antennas. A more convenient and descriptive approach uses time-domain parameters, such as efficiency, energy pattern, receiving area, etc., with the constraint that the reference or excitation signal is known. The utility of both approaches, for describing the time-domain performance, was demonstrated for antennas which are both small and large, in comparison to the length of the reference signal. The approaches have also been used for other antennas, such as arrays, where they also could be applied to measure the effects of mutual impedance, for a wide-bandwidth signal. The time-domain ground-plane antenna range, on which these measurements were made, is suitable for symmetric antennas. However, the approach can be readily adapted to asymmetric antennas, without a ground plane, by using suitable reference antennas.<<ETX>>",
"title": ""
},
{
"docid": "c8b57dc6e3ef7c6b8712733ec6177275",
"text": "A student information system provides a simple interface for the easy collation and maintenance of all manner of student information. The creation and management of accurate, up-to-date information regarding students' academic careers is critical students and for the faculties and administration ofSebha University in Libya and for any other educational institution. A student information system deals with all kinds of data from enrollment to graduation, including program of study, attendance record, payment of fees and examination results to name but a few. All these dataneed to be made available through a secure, online interface embedded in auniversity's website. To lay the groundwork for such a system, first we need to build the student database to be integrated with the system. Therefore we proposed and implementedan online web-based system, which we named the student data system (SDS),to collect and correct all student data at Sebha University. The output of the system was evaluated by using a similarity (Euclidean distance) algorithm. The results showed that the new data collected by theSDS can fill the gaps and correct the errors in the old manual data records.",
"title": ""
},
{
"docid": "7b7e41ced300aeff7916509c04c4fd6a",
"text": "We present and evaluate various content-based recommendation models that make use of user and item profiles defined in terms of weighted lists of social tags. The studied approaches are adaptations of the Vector Space and Okapi BM25 information retrieval models. We empirically compare the recommenders using two datasets obtained from Delicious and Last.fm social systems, in order to analyse the performance of the approaches in scenarios with different domains and tagging behaviours.",
"title": ""
},
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "aa73df5eadafff7533994c05a8d3c415",
"text": "In this paper, we report on the outcomes of the European project EduWear. The aim of the project was to develop a construction kit with smart textiles and to examine its impact on young people. The construction kit, including a suitable programming environment and a workshop concept, was adopted by children in a number of workshops.\n The evaluation of the workshops showed that designing, creating, and programming wearables with a smart textile construction kit allows for creating personal meaningful projects which relate strongly to aspects of young people's life worlds. Through their construction activities, participants became more self-confident in dealing with technology and were able to draw relations between their own creations and technologies present in their environment. We argue that incorporating such constructionist processes into an appropriate workshop concept is essential for triggering thought processes about the character of digital media beyond the construction process itself.",
"title": ""
},
{
"docid": "f119b0ee9a237ab1e9acdae19664df0f",
"text": "Recent editorials in this journal have defended the right of eminent biologist James Watson to raise the unpopular hypothesis that people of sub-Saharan African descent score lower, on average, than people of European or East Asian descent on tests of general intelligence. As those editorials imply, the scientific evidence is substantial in showing a genetic contribution to these differences. The unjustified ill treatment meted out to Watson therefore requires setting the record straight about the current state of the evidence on intelligence, race, and genetics. In this paper, we summarize our own previous reviews based on 10 categories of evidence: The worldwide distribution of test scores; the g factor of mental ability; heritability differences; brain size differences; trans-racial adoption studies; racial admixture studies; regression-to-the-mean effects; related life-history traits; human origins research; and the poverty of predictions from culture-only explanations. The preponderance of evidence demonstrates that in intelligence, brain size, and other life-history variables, East Asians average a higher IQ and larger brain than Europeans who average a higher IQ and larger brain than Africans. Further, these group differences are 50–80% heritable. These are facts, not opinions and science must be governed by data. There is no place for the ‘‘moralistic fallacy’’ that reality must conform to our social, political, or ethical desires. !c 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7bd3f6b7b2f79f08534b70c16be91c02",
"text": "This paper describes a dual-loop delay-locked loop (DLL) which overcomes the problem of a limited delay range by using multiple voltage-controlled delay lines (VCDLs). A reference loop generates quadrature clocks, which are then delayed with controllable amounts by four VCDLs and multiplexed to generate the output clock in a main loop. This architecture enables the DLL to emulate the infinite-length VCDL with multiple finite-length VCDLs. The DLL incorporates a replica biasing circuit for low-jitter characteristics and a duty cycle corrector immune to prevalent process mismatches. A test chip has been fabricated using a 0.25m CMOS process. At 400 MHz, the peak-to-peak jitter with a quiet 2.5-V supply is 54 ps, and the supply-noise sensitivity is 0.32 ps/mV.",
"title": ""
},
{
"docid": "b0727e320a1c532bd3ede4fd892d8d01",
"text": "Semantic technologies could facilitate realizing features like interoperability and reasoning for Internet of Things (IoT). However, the dynamic and heterogeneous nature of IoT data, constrained resources, and real-time requirements set challenges for applying these technologies. In this paper, we study approaches for delivering semantic data from IoT nodes to distributed reasoning engines and reasoning over such data. We perform experiments to evaluate the scalability of these approaches and also study how reasoning is affected by different data aggregation strategies.",
"title": ""
},
{
"docid": "5a61c356940eef5eb18c53a71befbe5b",
"text": "Recently, plant construction throughout the world, including nuclear power plant construction, has grown significantly. The scale of Korea’s nuclear power plant construction in particular, has increased gradually since it won a contract for a nuclear power plant construction project in the United Arab Emirates in 2009. However, time and monetary resources have been lost in some nuclear power plant construction sites due to lack of risk management ability. The need to prevent losses at nuclear power plant construction sites has become more urgent because it demands professional skills and large-scale resources. Therefore, in this study, the Analytic Hierarchy Process (AHP) and Fuzzy Analytic Hierarchy Process (FAHP) were applied in order to make comparisons between decision-making methods, to assess the potential risks at nuclear power plant construction sites. To suggest the appropriate choice between two decision-making methods, a survey was carried out. From the results, the importance and the priority of 24 risk factors, classified by process, cost, safety, and quality, were analyzed. The FAHP was identified as a suitable method for risk assessment of nuclear power plant construction, compared with risk assessment using the AHP. These risk factors will be able to serve as baseline data for risk management in nuclear power plant construction projects.",
"title": ""
},
{
"docid": "d5ddc141311afb6050a58be88303b577",
"text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"title": ""
},
{
"docid": "609cc8dd7323e817ddfc5314070a68bf",
"text": "We present EVO, an event-based visual odometry algorithm. Our algorithm successfully leverages the outstanding properties of event cameras to track fast camera motions while recovering a semidense three-dimensional (3-D) map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high dynamic range conditions with strong illumination changes. To achieve this, we combine a novel, event-based tracking approach based on image-to-model alignment with a recent event-based 3-D reconstruction algorithm in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in simultaneous localization and mapping by unlocking the potential of event cameras. This allows us to tackle challenging scenarios that are currently inaccessible to standard cameras.",
"title": ""
},
{
"docid": "7eca894697ee372abe6f67a069dcd910",
"text": "Government agencies and consulting companies in charge of pavement management face the challenge of maintaining pavements in serviceable conditions throughout their life from the functional and structural standpoints. For this, the assessment and prediction of the pavement conditions are crucial. This study proposes a neuro-fuzzy model to predict the performance of flexible pavements using the parameters routinely collected by agencies to characterize the condition of an existing pavement. These parameters are generally obtained by performing falling weight deflectometer tests and monitoring the development of distresses on the pavement surface. The proposed hybrid model for predicting pavement performance was characterized by multilayer, feedforward neural networks that led the reasoning process of the IF-THEN fuzzy rules. The results of the neuro-fuzzy model were superior to those of the linear regression model in terms of accuracy in the approximation. The proposed neuro-fuzzy model showed good generalization capability, and the evaluation of the model performance produced satisfactory results, demonstrating the efficiency and potential of these new mathematical modeling techniques.",
"title": ""
},
{
"docid": "60bdd255a19784ed2d19550222e61b69",
"text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.",
"title": ""
},
{
"docid": "255ff39001f9bbcd7b1e6fe96f588371",
"text": "We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting, lattice alignment, and successive decoding.",
"title": ""
},
{
"docid": "85b77b88c2a06603267b770dbad8ec73",
"text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.",
"title": ""
},
{
"docid": "a9b366b2b127b093b547f8a10ac05ca5",
"text": "Each user session in an e-commerce system can be modeled as a sequence of web pages, indicating how the user interacts with the system and makes his/her purchase. A typical recommendation approach, e.g., Collaborative Filtering, generates its results at the beginning of each session, listing the most likely purchased items. However, such approach fails to exploit current viewing history of the user and hence, is unable to provide a real-time customized recommendation service. In this paper, we build a deep recurrent neural network to address the problem. The network tracks how users browse the website using multiple hidden layers. Each hidden layer models how the combinations of webpages are accessed and in what order. To reduce the processing cost, the network only records a finite number of states, while the old states collapse into a single history state. Our model refreshes the recommendation result each time when user opens a new web page. As user's session continues, the recommendation result is gradually refined. Furthermore, we integrate the recurrent neural network with a Feedfoward network which represents the user-item correlations to increase the prediction accuracy. Our approach has been applied to Kaola (http://www.kaola.com), an e-commerce website powered by the NetEase technologies. It shows a significant improvement over previous recommendation service.",
"title": ""
}
] |
scidocsrr
|
5b96fcdf269af950900d3a8246473724
|
3D Point Cloud Learning for Large-scale Environment Analysis and Place Recognition
|
[
{
"docid": "845ee0b77e30a01d87e836c6a84b7d66",
"text": "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.",
"title": ""
},
{
"docid": "3da4bcf1e3bcb3c5feb27fd05e43da80",
"text": "This paper introduces a texture representation suitable for recognizing images of textured surfaces under a wide range of transformations, including viewpoint changes and nonrigid deformations. At the feature extraction stage, a sparse set of affine Harris and Laplacian regions is found in the image. Each of these regions can be thought of as a texture element having a characteristic elliptic shape and a distinctive appearance pattern. This pattern is captured in an affine-invariant fashion via a process of shape normalization followed by the computation of two novel descriptors, the spin image and the RIFT descriptor. When affine invariance is not required, the original elliptical shape serves as an additional discriminative feature for texture recognition. The proposed approach is evaluated in retrieval and classification tasks using the entire Brodatz database and a publicly available collection of 1,000 photographs of textured surfaces taken from different viewpoints.",
"title": ""
},
{
"docid": "7a8fbfe463f6d5c61df7db1c1d2670c9",
"text": "State-of-the-art autonomous driving systems rely heavily on detailed and highly accurate prior maps. However, outside of small urban areas, it is very challenging to build, store, and transmit detailed maps since the spatial scales are so large. Furthermore, maintaining detailed maps of large rural areas can be impracticable due to the rapid rate at which these environments can change. This is a significant limitation for the widespread applicability of autonomous driving technology, which has the potential for an incredibly positive societal impact. In this paper, we address the problem of autonomous navigation in rural environments through a novel mapless driving framework that combines sparse topological maps for global navigation with a sensor-based perception system for local navigation. First, a local navigation goal within the sensor view of the vehicle is chosen as a waypoint leading towards the global goal. Next, the local perception system generates a feasible trajectory in the vehicle frame to reach the waypoint while abiding by the rules of the road for the segment being traversed. These trajectories are updated to remain in the local frame using the vehicle's odometry and the associated uncertainty based on the least-squares residual and a recursive filtering approach, which allows the vehicle to navigate road networks reliably, and at high speed, without detailed prior maps. We demonstrate the performance of the system on a full-scale autonomous vehicle navigating in a challenging rural environment and benchmark the system on a large amount of collected data.",
"title": ""
},
{
"docid": "348a5c33bde53e7f9a1593404c6589b4",
"text": "Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"title": ""
}
] |
[
{
"docid": "a854ee8cf82c4bd107e93ed0e70ee543",
"text": "Although the memorial benefits of testing are well established empirically, the mechanisms underlying this benefit are not well understood. The authors evaluated the mediator shift hypothesis, which states that test-restudy practice is beneficial for memory because retrieval failures during practice allow individuals to evaluate the effectiveness of mediators and to shift from less effective to more effective mediators. Across a series of experiments, participants used a keyword encoding strategy to learn word pairs with test-restudy practice or restudy only. Robust testing effects were obtained in all experiments, and results supported predictions of the mediator shift hypothesis. First, a greater proportion of keyword shifts occurred during test-restudy practice versus restudy practice. Second, a greater proportion of keyword shifts occurred after retrieval failure trials versus retrieval success trials during test-restudy practice. Third, a greater proportion of keywords were recalled on a final keyword recall test after test-restudy versus restudy practice.",
"title": ""
},
{
"docid": "cc3d0d9676ad19f71b4a630148c4211f",
"text": "OBJECTIVES\nPrevious studies have revealed that memory performance is diminished in chronic pain patients. Few studies, however, have assessed multiple components of memory in a single sample. It is currently also unknown whether attentional problems, which are commonly observed in chronic pain, mediate the decline in memory. Finally, previous studies have focused on middle-aged adults, and a possible detrimental effect of aging on memory performance in chronic pain patients has been commonly disregarded. This study, therefore, aimed at describing the pattern of semantic, working, and visual and verbal episodic memory performance in participants with chronic pain, while testing for possible contributions of attention and age to task performance.\n\n\nMETHODS\nThirty-four participants with chronic pain and 32 pain-free participants completed tests of episodic, semantic, and working memory to assess memory performance and a test of attention.\n\n\nRESULTS\nParticipants with chronic pain performed worse on tests of working memory and verbal episodic memory. A decline in attention explained some, but not all, group differences in memory performance. Finally, no additional effect of age on the diminished task performance in participants with chronic pain was observed.\n\n\nDISCUSSION\nTaken together, the results indicate that chronic pain significantly affects memory performance. Part of this effect may be caused by underlying attentional dysfunction, although this could not fully explain the observed memory decline. An increase in age in combination with the presence of chronic pain did not additionally affect memory performance.",
"title": ""
},
{
"docid": "4e791e4367b5ef9ff4259a87b919cff7",
"text": "Considerable attention has been paid to dating the earliest appearance of hominins outside Africa. The earliest skeletal and artefactual evidence for the genus Homo in Asia currently comes from Dmanisi, Georgia, and is dated to approximately 1.77–1.85 million years ago (Ma)1. Two incisors that may belong to Homo erectus come from Yuanmou, south China, and are dated to 1.7 Ma2; the next-oldest evidence is an H. erectus cranium from Lantian (Gongwangling)—which has recently been dated to 1.63 Ma3—and the earliest hominin fossils from the Sangiran dome in Java, which are dated to about 1.5–1.6 Ma4. Artefacts from Majuangou III5 and Shangshazui6 in the Nihewan basin, north China, have also been dated to 1.6–1.7 Ma. Here we report an Early Pleistocene and largely continuous artefact sequence from Shangchen, which is a newly discovered Palaeolithic locality of the southern Chinese Loess Plateau, near Gongwangling in Lantian county. The site contains 17 artefact layers that extend from palaeosol S15—dated to approximately 1.26 Ma—to loess L28, which we date to about 2.12 Ma. This discovery implies that hominins left Africa earlier than indicated by the evidence from Dmanisi. An Early Pleistocene artefact assemblage from the Chinese Loess Plateau indicates that hominins had left Africa by at least 2.1 million years ago, and occupied the Loess Plateau repeatedly for a long time.",
"title": ""
},
{
"docid": "8ebab4a80cdff32082b86b7c698856f0",
"text": "One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independently from their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influencing factors like the hardware platform or the usage profile into account. Our approach uses the Palladio Component Model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. This paper presents our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy is sufficient to support the evaluation of architectural design decisions.",
"title": ""
},
{
"docid": "f6342101ff8315bcaad4e4f965e6ba8a",
"text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].",
"title": ""
},
{
"docid": "24006b9eb670c84904b53320fbedd32c",
"text": "Maturity Models have been introduced, over the last four decades, as guides and references for Information System management in organizations from different sectors of activity. In the healthcare field, Maturity Models have also been used to deal with the enormous complexity and demand of Hospital Information Systems. This article presents a research project that aimed to develop a new comprehensive model of maturity for a health area. HISMM (Hospital Information System Maturity Model) was developed to address a complexity of SIH and intends to offer a useful tool for the demanding role of its management. The HISMM has the peculiarity of congregating a set of key maturity Influence Factors and respective characteristics, enabling not only the assessment of the global maturity of a HIS but also the individual maturity of its different dimensions. In this article, we present the methodology for the development of Maturity Models adopted for the creation of HISMM and the underlying reasons for its choice.",
"title": ""
},
{
"docid": "d3bdff7b747b5804971534cfbfd2ce53",
"text": "The consequences of security problems are increasingly serious. These problems can now lead to personal injury, prolonged downtime and irreparable damage to capital goods. To achieve this, systems require end-to-end security solutions that cover the layers of connectivity, furthermore, guarantee the privatization and protection of data circulated via networks. In this paper, we will give a definition to the Internet of things, try to dissect its architecture (protocols, layers, entities …), thus giving a state of the art of security in the field of internet of things (Faults detected in each layer …), finally, mention the solutions proposed until now to help researchers start their researches on internet of things security subject.",
"title": ""
},
{
"docid": "ee0819baef1a64702ef4b6b93564ed75",
"text": "Solitary pigmented melanocytic intraoral lesions of the oral cavity are rare. Oral nevus is a congenital or acquired benign neoplasm. Oral compound nevus constitutes 5.9%-16.5% of all oral melanocytic nevi. The oral compound nevus is commonly seen on hard palate and buccal mucosa and rarely on other intraoral sites. The objective of this article is to present a rare case report of oral compound nevus in the retromolar pad region along with a review of literature. A 22 year old female reported with a solitary black pigmented papule at retromolar pad region which was surgically removed and microscopic investigation confirmed the diagnosis of oral compound nevus.",
"title": ""
},
{
"docid": "8a77ab964896d3fea327e76b2efad8ef",
"text": "We present the fundamental ideas underlying statistical hypothesis testing using the frequentist framework. We start with a simple example that builds up the one-sample t-test from the beginning, explaining important concepts such as the sampling distribution of the sample mean, and the iid assumption. Then we examine the meaning of the p-value in detail, and discuss several important misconceptions about what a p-value does and does not tell us. This leads to a discussion of Type I, II error and power, and Type S and M error. An important conclusion from this discussion is that one should aim to carry out appropriately powered studies. Next, we discuss two common issues we have encountered in psycholinguistics and linguistics: running experiments until significance is reached, and the “garden-of-forking-paths” problem discussed by Gelman and others. The best way to use frequentist methods is to run appropriately powered studies, check model assumptions, clearly separate exploratory data analysis from planned comparisons decided upon before the study was run, and always attempt to replicate results.",
"title": ""
},
{
"docid": "db78855cfd464e54f6aafdce8b412a2f",
"text": "Agent is not only the core concept of complexity theory, but the most elementary component in implementing knowledge management systems. This article, based on the theory of complexity and combined the obtained research results, discusses the definition, structure, composition of different agents. It also concerns the relationship among agents in knowledge management and the action mode of multiple agents.",
"title": ""
},
{
"docid": "a064ad01edd6a369d939736e04831e50",
"text": "Asthma is frequently undertreated, resulting in a relatively high prevalence of patients with uncontrolled disease, characterized by the presence of symptoms and risk of adverse outcomes. Patients with uncontrolled asthma have a higher risk of morbidity and mortality, underscoring the importance of identifying uncontrolled disease and modifying management plans to improve control. Several assessment tools exist to evaluate control with various cutoff points and measures, but these tools do not reliably correlate with physiological measures and should be considered a supplement to physiological tests. When attempting to improve control in patients, nonpharmacological interventions should always be attempted before changing or adding pharmacotherapies. Among patients with severe, uncontrolled asthma, individualized treatment based on asthma phenotype and eosinophil presence should be considered. The efficacy of the anti-IgE antibody omalizumab has been well established for patients with allergic asthma, and novel biologic agents targeting IL-5, IL-13, IL-4, and other allergic pathways have been investigated for patients with allergic or eosinophilic asthma. Fevipiprant (a CRTH2 [chemokine receptor homologous molecule expressed on Th2 cells] antagonist) and imatinib (a tyrosine kinase inhibition) are examples of nonbiologic therapies that may be useful for patients with severe, uncontrolled asthma. Incorporation of new and emerging treatment into therapeutic strategies for patients with severe asthma may improve outcomes for this patient population.",
"title": ""
},
{
"docid": "28e0bd104c8654ed9ad007c66bae0461",
"text": "Today, journalist, information analyst, and everyday news consumers are tasked with discerning and fact-checking the news. This task has became complex due to the ever-growing number of news sources and the mixed tactics of maliciously false sources. To mitigate these problems, we introduce the The News Landscape (NELA) Toolkit: an open source toolkit for the systematic exploration of the news landscape. NELA allows users to explore the credibility of news articles using well-studied content-based markers of reliability and bias, as well as, filter and sort through article predictions based on the users own needs. In addition, NELA allows users to visualize the media landscape at different time slices using a variety of features computed at the source level. NELA is built with a modular, pipeline design, to allow researchers to add new tools to the toolkit with ease. Our demo is an early transition of automated news credibility research to assist human fact-checking efforts and increase the understanding of the news ecosystem as a whole. To use this tool, go to http://nelatoolkit.science",
"title": ""
},
{
"docid": "0e120a405e8538c8d46fe0a50463366f",
"text": "Two studies were conducted to investigate the effects of red pepper (capsaicin) on feeding behaviour and energy intake. In the first study, the effects of dietary red pepper added to high-fat (HF) and high-carbohydrate (HC) meals on subsequent energy and macronutrient intakes were examined in thirteen Japanese female subjects. After the ingestion of a standardized dinner on the previous evening, the subjects ate an experimental breakfast (1883 kJ) of one of the following four types: (1) HF; (2) HF and red pepper (10 g); (3) HC; (4) HC and red pepper. Ad libitum energy and macronutrient intakes were measured at lunch-time. The HC breakfast significantly reduced the desire to eat and hunger after breakfast. The addition of red pepper to the HC breakfast also significantly decreased the desire to eat and hunger before lunch. Differences in diet composition at breakfast time did not affect energy and macronutrient intakes at lunch-time. However, the addition of red pepper to the breakfast significantly decreased protein and fat intakes at lunch-time. In Study 2, the effects of a red-pepper appetizer on subsequent energy and macronutrient intakes were examined in ten Caucasian male subjects. After ingesting a standardized breakfast, the subjects took an experimental appetizer (644 kJ) at lunch-time of one of the following two types: (1) mixed diet and appetizer; (2) mixed diet and red-pepper (6 g) appetizer. The addition of red pepper to the appetizer significantly reduced the cumulative ad libitum energy and carbohydrate intakes during the rest of the lunch and in the snack served several hours later. Moreover, the power spectral analysis of heart rate revealed that this effect of red pepper was associated with an increase in the ratio sympathetic: parasympathetic nervous system activity. These results indicate that the ingestion of red pepper decreases appetite and subsequent protein and fat intakes in Japanese females and energy intake in Caucasian males. Moreover, this effect might be related to an increase in sympathetic nervous system activity in Caucasian males.",
"title": ""
},
{
"docid": "73242ddfc886fd767d6689d608918cad",
"text": "The chemical reduction of graphene oxide (GO) typically involves highly toxic reducing agents that are harmful to human health and environment, and complicated surface modification is often needed to avoid aggregation of the reduced GO during reduction process. In this paper, a green and facile strategy is reported for the fabrication of soluble reduced GO. The proposed method is based on the reduction of exfoliated GO in green tea solution by making use of the reducing capability and the aromatic rings of tea polyphenol (TP) that contained in tea solution. The measurements of the resultant graphene confirm the efficient removal of the oxygen-containing groups in GO. The strong interactions between the reduced graphene and the aromatic TPs guarantee the good dispersion of the reduced graphene in both aqueous and a variety of organic solvents. These features endow this green approach with great potential in constructing of various graphene-based materials, especially for high-performance biorelated materials as demonstrated in this study of chitosan/graphene composites.",
"title": ""
},
{
"docid": "d7eca0ca4da72bca2d74d484e4dec8ce",
"text": "Recent studies have shown that the human genome has a haplotype block structure such that it can be divided into discrete blocks of limited haplotype diversity. Patil et al. [6] and Zhang et al. [12] developed algorithms to partition haplotypes into blocks with minimum number of tag SNPs for the entire chromosome. However, it is not clear how to partition haplotypes into blocks with restricted number of SNPs when only limited resources are available. In this paper, we first formulated this problem as finding a block partition with a fixed number of tag SNPs that can cover the maximal percentage of a genome. Then we solved it by two dynamic programming algorithms, which are fairly flexible to take into account the knowledge of functional polymorphism. We applied our algorithms to the published SNP data of human chromosome 21 combining with the functional information of these SNPs and demonstrated the effectiveness of them. Statistical investigation of the relationship between the starting points of a block partition and the coding and non-coding regions illuminated that the SNPs at these starting points are not significantly enriched in coding regions. We also developed an efficient algorithm to find all possible long local maximal haplotypes across a subset of samples. After applying this algorithm to the human chromosome 21 haplotype data, we found that samples with long local haplotypes are not necessarily globally similar.",
"title": ""
},
{
"docid": "93c84b6abfe30ff7355e4efc310b440b",
"text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.",
"title": ""
},
{
"docid": "e061e276254cb541826a066dcaf7a460",
"text": "Effective data visualization is a key part of the discovery process in the era of “big data”. It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data into knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of the applicable algorithms, and in helping to identify and remove bad data from the analysis. However, a high complexity or a high dimensionality of modern data sets represents a critical obstacle. How do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multidimensional information poses some deep questions in the field of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, both as software and inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to a collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.",
"title": ""
},
{
"docid": "9c800a53208bf1ded97e963ed4f80b28",
"text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.",
"title": ""
},
{
"docid": "9f177381c2ba4c6c90faee339910c6c6",
"text": "Behavior genetics has demonstrated that genetic variance is an important component of variation for all behavioral outcomes , but variation among families is not. These results have led some critics of behavior genetics to conclude that heritability is so ubiquitous as to have few consequences for scientific understanding of development , while some behavior genetic partisans have concluded that family environment is not an important cause of developmental outcomes. Both views are incorrect. Genotype is in fact a more systematic source of variability than environment, but for reasons that are methodological rather than substantive. Development is fundamentally nonlinear, interactive, and difficult to control experimentally. Twin studies offer a useful methodologi-cal shortcut, but do not show that genes are more fundamental than environments. The nature-nurture debate is over. The bottom line is that everything is heritable, an outcome that has taken all sides of the nature-nurture debate by surprise. Irving Gottesman and I have suggested that the universal influence of genes on behavior be enshrined as the first law of behavior genetics (Turkheimer & Gottesman, 1991), and at the risk of naming laws that I can take no credit for discovering, it is worth stating the nearly unanimous results of behavior genetics in a more formal manner. ● First Law. All human behavioral traits are heritable. ● Second Law. The effect of being raised in the same family is smaller than the effect of genes. ● Third Law. A substantial portion of the variation in complex human behavioral traits is not accounted for by the effects of genes or families. It is not my purpose in this brief article to defend these three laws against the many exceptions that might be claimed. The point is that now that the empirical facts are in and no longer a matter of serious controversy, it is time to turn attention to what the three laws mean, to the implications of the genetics of behavior for an understanding of complex human behavior and its development. VARIANCE AND CAUSATION IN BEHAVIORAL DEVELOPMENT If the first two laws are taken literally , they seem to herald a great victory for the nature side of the old debate: Genes matter, families do not. To understand why such views are at best an oversimplification of a complex reality, it is necessary to consider the newest wave of opposition that behavior genetics has generated. These new critics , whose most …",
"title": ""
}
] |
scidocsrr
|
17c0b6b7bd2a677f57fe46e35c9b425b
|
Danger Theory Concepts Improving Malware Detection of Intrusion Detection Systems That Uses Exact Graphs
|
[
{
"docid": "a40d11652a42ac6a6bf4368c9665fb3b",
"text": "This paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes. The taxonomy consists of a classification first of the detection principle, and second of certain operational aspects of the intrusion detection system as such. The systems are also grouped according to the increasing difficulty of the problem they attempt to address. These classifications are used predictively, pointing towards a number of areas of future research in the field of intrusion detection.",
"title": ""
}
] |
[
{
"docid": "9963e1f7126812d9111a4cb6a8eb8dc6",
"text": "The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule&exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in a automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of art, our contribution is in the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.",
"title": ""
},
{
"docid": "02556d17da9f21251c454ba2bb114aa6",
"text": "Thermal sense plays an important role in object recognition. The idea to identify materials by heat flow from the skin to the touched materials have been introduced to artificial thermal sensors and proved effective. Moreover, the addition of tactile sensors is found to improve the performance of material identification by obtaining further properties of objects such as roughness and stiffness. However, the application of these sensors to autonomous recognition by a robot is not a well investigated topic. Contact with the objects will vary in such cases, but no previous sensor has shown the ability of material recognition robust to change in contact conditions with the required size and softness. The limitation is partly due to the design of current thermal sensors where the skin and the sensing devices are separated. This paper introduces a soft anthropomorphic finger with both thermal and tactile sensing elements embedded in soft polyurethane material. Experimental results show the ability to discriminate materials robust to variety in contact condition by virtue of the deformable tissue and compensation by the tactile sensors. Recognition robust to change of temperature is also realized by an additional thermal sensor.",
"title": ""
},
{
"docid": "49e1d016e1aae07d5e3ae1ad0e96e662",
"text": "Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with \"full-fledged\" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and ommunication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources.\n We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way.\n Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.",
"title": ""
},
{
"docid": "dd54483344a58ec7822237d1a222d67e",
"text": "It is widely recognized that the risk of fractures is closely related to the typical decline in bone mass during the ageing process in both women and men. Exercise has been reported as one of the best non-pharmacological ways to improve bone mass throughout life. However, not all exercise regimens have the same positive effects on bone mass, and the studies that have evaluated the role of exercise programmes on bone-related variables in elderly people have obtained inconclusive results. This systematic review aims to summarize and update present knowledge about the effects of different types of training programmes on bone mass in older adults and elderly people as a starting point for developing future interventions that maintain a healthy bone mass and higher quality of life in people throughout their lifetime. A literature search using MEDLINE and the Cochrane Central Register of Controlled Trials databases was conducted and bibliographies for studies discussing the effect of exercise interventions in older adults published up to August 2011 were examined. Inclusion criteria were met by 59 controlled trials, 7 meta-analyses and 8 reviews. The studies included in this review indicate that bone-related variables can be increased, or at least the common decline in bone mass during ageing attenuated, through following specific training programmes. Walking provides a modest increase in the loads on the skeleton above gravity and, therefore, this type of exercise has proved to be less effective in osteoporosis prevention. Strength exercise seems to be a powerful stimulus to improve and maintain bone mass during the ageing process. Multi-component exercise programmes of strength, aerobic, high impact and/or weight-bearing training, as well as whole-body vibration (WBV) alone or in combination with exercise, may help to increase or at least prevent decline in bone mass with ageing, especially in postmenopausal women. This review provides, therefore, an overview of intervention studies involving training and bone measurements among older adults, especially postmenopausal women. Some novelties are that WBV training is a promising alternative to prevent bone fractures and osteoporosis. Because this type of exercise under prescription is potentially safe, it may be considered as a low impact alternative to current methods combating bone deterioration. In other respects, the ability of peripheral quantitative computed tomography (pQCT) to assess bone strength and geometric properties may prove advantageous in evaluating the effects of training on bone health. As a result of changes in bone mass becoming evident by pQCT even when dual energy X-ray absortiometry (DXA) measurements were unremarkable, pQCT may provide new knowledge about the effects of exercise on bone that could not be elucidated by DXA. Future research is recommended including longest-term exercise training programmes, the addition of pQCT measurements to DXA scanners and more trials among men, including older participants.",
"title": ""
},
{
"docid": "4933f3f3007dab687fc852e9c2b1ab0a",
"text": "This paper presents a topology for bidirectional solid-state transformers with a minimal device count. The topology, referenced as dynamic-current or Dyna-C, has two current-source inverter stages with a high-frequency galvanic isolation, requiring 12 switches for four-quadrant three-phase ac/ac power conversion. The topology has voltage step-up/down capability, and the input and output can have arbitrary power factors and frequencies. Further, the Dyna-C can be configured as isolated power converters for single- or multiterminal dc, and single- or multiphase ac systems. The modular nature of the Dyna-C lends itself to be connected in series and/or parallel for high-voltage high-power applications. The proposed converter topology can find a broad range of applications such as isolated battery chargers, uninterruptible power supplies, renewable energy integration, smart grid, and power conversion for space-critical applications including aviation, locomotives, and ships. This paper outlines various configurations of the Dyna-C, as well as the relative operation and controls. The converter functionality is validated through simulations and experimental measurements of a 50-kVA prototype.",
"title": ""
},
{
"docid": "766c723d00ac15bf31332c8ab4b89b63",
"text": "For those people without artistic talent, they can only draw rough or even awful doodles to express their ideas. We propose a doodle beautification system named Doodle Master, which can transfer a rough doodle to a plausible image and also keep the semantic concepts of the drawings. The Doodle Master applies the VAE/GAN model to decode and generate the beautified result from a constrained latent space. To achieve better performance for sketch data which is more like discrete distribution, a shared-weight method is proposed to improve the learnt features of the discriminator with the aid of the encoder. Furthermore, we design an interface for the user to draw with basic drawing tools and adjust the number of reconstruction times. The experiments show that the proposed Doodle Master system can successfully beautify the rough doodle or sketch in real-time.",
"title": ""
},
{
"docid": "36468fcf38ad4314c270890c5cdf4f46",
"text": "With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies.",
"title": ""
},
{
"docid": "2bd93dcbc1dad25206059c9d3a7f6f75",
"text": "We explore a novel approach for Semantic Role Labeling (SRL) by casting it as a sequence-to-sequence process. We employ an attention-based model enriched with a copying mechanism to ensure faithful regeneration of the input sequence, while enabling interleaved generation of argument role labels. Here, we apply this model in a monolingual setting, performing PropBank SRL on English language data. The constrained sequence generation set-up enforced with the copying mechanism allows us to analyze the performance and special properties of the model on manually labeled data and benchmarking against state-of-the-art sequence labeling models. We show that our model is able to solve the SRL argument labeling task on English data, yet further structural decoding constraints will need to be added to make the model truly competitive. Our work represents a first step towards more advanced, generative SRL labeling setups.",
"title": ""
},
{
"docid": "209e447e4bd7b9e4640548116d968662",
"text": "Color vision deficiency (CVD) affects more than 4% of the population and leads to a different visual perception of colors. Though this has been known for decades, colormaps with many colors across the visual spectra are often used to represent data, leading to the potential for misinterpretation or difficulty with interpretation by someone with this deficiency. Until the creation of the module presented here, there were no colormaps mathematically optimized for CVD using modern color appearance models. While there have been some attempts to make aesthetically pleasing or subjectively tolerable colormaps for those with CVD, our goal was to make optimized colormaps for the most accurate perception of scientific data by as many viewers as possible. We developed a Python module, cmaputil, to create CVD-optimized colormaps, which imports colormaps and modifies them to be perceptually uniform in CVD-safe colorspace while linearizing and maximizing the brightness range. The module is made available to the science community to enable others to easily create their own CVD-optimized colormaps. Here, we present an example CVD-optimized colormap created with this module that is optimized for viewing by those without a CVD as well as those with red-green colorblindness. This colormap, cividis, enables nearly-identical visual-data interpretation to both groups, is perceptually uniform in hue and brightness, and increases in brightness linearly.",
"title": ""
},
{
"docid": "f0e35100617a7e34a04e43d6bee8db9d",
"text": "This paper presents a pruned sets method (PS) for multi-label classification. It is centred on the concept of treating sets of labels as single labels. This allows the classification process to inherently take into account correlations between labels. By pruning these sets, PS focuses only on the most important correlations, which reduces complexity and improves accuracy. By combining pruned sets in an ensemble scheme (EPS), new label sets can be formed to adapt to irregular or complex data. The results from experimental evaluation on a variety of multi-label datasets show that [E]PS can achieve better performance and train much faster than other multi-label methods.",
"title": ""
},
{
"docid": "08d87fbc4a7f83f451707aef6f6b0342",
"text": "This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three dimensional space. ZeroN serves as a tangible rep-resentation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To ac-complish this, we developed a magnetic control system that can levitate and actuate a permanent magnet in a pre-defined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology and interaction scenarios, discuss initial observations, and outline future development.",
"title": ""
},
{
"docid": "3d335bfc7236ea3596083d8cae4f29e3",
"text": "OBJECTIVE\nTo summarise the applications and appropriate use of Dietary Reference Intakes (DRIs) as guidance for nutrition and health research professionals in the dietary assessment of groups and individuals.\n\n\nDESIGN\nKey points from the Institute of Medicine report, Dietary Reference Intakes: Applications in Dietary Assessment, are summarised in this paper. The different approaches for using DRIs to evaluate the intakes of groups vs. the intakes of individuals are highlighted.\n\n\nRESULTS\nEach of the new DRIs is defined and its role in the dietary assessment of groups and individuals is described. Two methods of group assessment and a new method for quantitative assessment of individuals are described. Illustrations are provided on appropriate use of the Estimated Average Requirement (EAR), the Adequate Intake (AI) and the Tolerable Upper Intake Level (UL) in dietary assessment.\n\n\nCONCLUSIONS\nDietary assessment of groups or individuals must be based on estimates of usual (long-term) intake. The EAR is the appropriate DRI to use in assessing groups and individuals. The AI is of limited value in assessing nutrient adequacy, and cannot be used to assess the prevalence of inadequacy. The UL is the appropriate DRI to use in assessing the proportion of a group at risk of adverse health effects. It is inappropriate to use the Recommended Dietary Allowance (RDA) or a group mean intake to assess the nutrient adequacy of groups.",
"title": ""
},
{
"docid": "3cda92028692a25411d74e5a002740ac",
"text": "Protecting sensitive information from unauthorized disclosure is a major concern of every organization. As an organization’s employees need to access such information in order to carry out their daily work, data leakage detection is both an essential and challenging task. Whether caused by malicious intent or an inadvertent mistake, data loss can result in significant damage to the organization. Fingerprinting is a content-based method used for detecting data leakage. In fingerprinting, signatures of known confidential content are extracted and matched with outgoing content in order to detect leakage of sensitive content. Existing fingerprinting methods, however, suffer from two major limitations. First, fingerprinting can be bypassed by rephrasing (or minor modification) of the confidential content, and second, usually the whole content of document is fingerprinted (including non-confidential parts), resulting in false alarms. In this paper we propose an extension to the fingerprinting approach that is based on sorted k-skip-n-grams. The proposed method is able to produce a fingerprint of the core confidential content which ignores non-relevant (non-confidential) sections. In addition, the proposed fingerprint method is more robust to rephrasing and can also be used to detect a previously unseen confidential document and therefore provide better detection of intentional leakage incidents.",
"title": ""
},
{
"docid": "57416ef0f8ec577433898fb1a9e46bee",
"text": "New types of synthetic cannabinoid designer drugs are constantly introduced to the illicit drug market to circumvent legislation. Recently, N-(1-Adamantyl)-1-(5-fluoropentyl)-1H-indazole-3-carboxamide (5F-AKB-48), also known as 5F-APINACA, was identified as an adulterant in herbal products. This compound deviates from earlier JHW-type synthetic cannabinoids by having an indazole ring connected to an adamantyl group via a carboxamide linkage. Synthetic cannabinoids are completely metabolized, and identification of the metabolites is thus crucial when using urine as the sample matrix. Using an authentic urine sample and high-resolution accurate-mass Fourier transform Orbitrap mass spectrometry, we identified 16 phase-I metabolites of 5F-AKB-48. The modifications included mono-, di-, and trihydroxylation on the adamantyl ring alone or in combination with hydroxylation on the N-fluoropentylindazole moiety, dealkylation of the N-fluoropentyl side chain, and oxidative loss of fluorine as well as combinations thereof. The results were compared to human liver microsomal (HLM) incubations, which predominantly showed time-dependent formation of mono-, di-, and trihydroxylated metabolites having the hydroxyl groups on the adamantyl ring. The results presented here may be used to select metabolites specific of 5F-AKB-48 for use in clinical and forensic screening.",
"title": ""
},
{
"docid": "19067b3d0f951bad90c80688371532fc",
"text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead",
"title": ""
},
{
"docid": "afeacb3127199fff7d5a5212ab5d6af2",
"text": "OBJECTIVE\nStudies have suggested that even a single session of physical exercise enhances executive functions. ADHD is among the most common developmental disorders in childhood, but little is known about alternative treatments for this disorder. Therefore, we performed a systematic review of the literature to analyze articles that evaluated the executive functions of children with ADHD after an acute exercise session.\n\n\nMETHOD\nWe reviewed articles indexed in the PubMed, American Psychiatric Association (APA) psychNET, Scopus, and Web of Knowledge databases between 1980 and 2013.\n\n\nRESULTS\nOf 231 articles selected, only three met the inclusion criteria.\n\n\nCONCLUSION\nBased on these 3 articles, we concluded that 30 min of physical exercise reportedly improved the executive functions of children with ADHD. Due to the small number of articles selected, further studies are needed to confirm these benefits.",
"title": ""
},
{
"docid": "376bac86251e8a1f8bc0b3af2629f900",
"text": "The security of software systems can be threatened by many internal and external threats, including data leakages due to timing channels. Even if developers manage to avoid security threats in the source code or bytecode during development and testing, new threats can arise as the compiler generates machine codes from representations at the binary code level during execution on the processor or due to operating system specifics. Current approaches either do not allow the neutralization of timing channels to be achieved comprehensively with a sufficient degree of security or require an unjustifiable amount of time and/or resources. Herein, a method is demonstrated for the protected execution of software based on a secure virtual execution environment (VEE) that combines the results from dynamic and static analyses to find timing channels through the application of code transformations. This solution complements other available techniques to prevent timing channels from being exploited. This approach helps control the appearance and neutralization of timing channels via just-in-time code modifications during all stages of program development and usage. This work demonstrates the identification of threats using timing channels as an example. The approach presented herein can be expanded to the neutralization of other types of threats.",
"title": ""
},
{
"docid": "cda6f812328d1a883b0c5938695981fe",
"text": "This paper investigates the problem of weakly-supervised semantic segmentation, where image-level labels are used as weak supervision. Inspired by the successful use of Convolutional Neural Networks (CNNs) for fully-supervised semantic segmentation, we choose to directly train the CNNs over the oversegmented regions of images for weakly-supervised semantic segmentation. Although there are a few studies on CNNs-based weakly-supervised semantic segmentation, they have rarely considered the noise issue, i.e., the initial weak labels (e.g., social tags) may be noisy. To cope with this issue, we thus propose graph-boosted CNNs (GB-CNNs) for weakly-supervised semantic segmentation. In our GB-CNNs, the graph-based model provides the initial supervision for training the CNNs, and then the outcomes of the CNNs are used to retrain the graph-based model. This training procedure is iteratively implemented to boost the results of semantic segmentation. Experimental results demonstrate that the proposed model outperforms the state-of-the-art weakly-supervised methods. More notably, the proposed model is shown to be more robust in the noisy setting for weakly-supervised semantic segmentation.",
"title": ""
},
{
"docid": "11acd265c1d533916b797bd6015b9eef",
"text": "Genetic and anatomical evidence suggests that Homo sapiens arose in Africa between 200 and 100ka, and recent evidence suggests that complex cognition may have appeared between ~164 and 75ka. This evidence directs our focus to Marine Isotope Stage (MIS) 6, when from 195-123ka the world was in a fluctuating but predominantly glacial stage, when much of Africa was cooler and drier, and when dated archaeological sites are rare. Previously we have shown that humans had expanded their diet to include marine resources by ~164ka (±12ka) at Pinnacle Point Cave 13B (PP13B) on the south coast of South Africa, perhaps as a response to these harsh environmental conditions. The associated material culture documents an early use and modification of pigment, likely for symbolic behavior, as well as the production of bladelet stone tool technology, and there is now intriguing evidence for heat treatment of lithics. PP13B also includes a later sequence of MIS 5 occupations that document an adaptation that increasingly focuses on coastal resources. A model is developed that suggests that the combined richness of the Cape Floral Region on the south coast of Africa, with its high diversity and density of geophyte plants and the rich coastal ecosystems of the associated Agulhas Current, combined to provide a stable set of carbohydrate and protein resources for early modern humans along the southern coast of South Africa during this crucial but environmentally harsh phase in the evolution of modern humans. Humans structured their mobility around the use of coastal resources and geophyte abundance and focused their occupation at the intersection of the geophyte rich Cape flora and coastline. The evidence for human occupation relative to the distance to the coastline over time at PP13B is consistent with this model.",
"title": ""
}
] |
scidocsrr
|
f9b7547746046886ca65804f7ffe1405
|
ASPIER: An Automated Framework for Verifying Security Protocol Implementations
|
[
{
"docid": "2a60bb7773d2e5458de88d2dc0e78e54",
"text": "Many system errors do not emerge unless some intricate sequence of events occurs. In practice, this means that most systems have errors that only trigger after days or weeks of execution. Model checking [4] is an effective way to find such subtle errors. It takes a simplified description of the code and exhaustively tests it on all inputs, using techniques to explore vast state spaces efficiently. Unfortunately, while model checking systems code would be wonderful, it is almost never done in practice: building models is just too hard. It can take significantly more time to write a model than it did to write the code. Furthermore, by checking an abstraction of the code rather than the code itself, it is easy to miss errors.The paper's first contribution is a new model checker, CMC, which checks C and C++ implementations directly, eliminating the need for a separate abstract description of the system behavior. This has two major advantages: it reduces the effort to use model checking, and it reduces missed errors as well as time-wasting false error reports resulting from inconsistencies between the abstract description and the actual implementation. In addition, changes in the implementation can be checked immediately without updating a high-level description.The paper's second contribution is demonstrating that CMC works well on real code by applying it to three implementations of the Ad-hoc On-demand Distance Vector (AODV) networking protocol [7]. We found 34 distinct errors (roughly one bug per 328 lines of code), including a bug in the AODV specification itself. Given our experience building systems, it appears that the approach will work well in other contexts, and especially well for other networking protocols.",
"title": ""
},
{
"docid": "d1c46994c5cfd59bdd8d52e7d4a6aa83",
"text": "Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.",
"title": ""
},
{
"docid": "7d634a9abe92990de8cb41a78c25d2cc",
"text": "We present a new automatic cryptographic protocol verifier based on a simple representation of the protocol by Prolog rules, and on a new efficient algorithm that determines whether a fact can be proved from these rules or not. This verifier proves secrecy properties of the protocols. Thanks to its use of unification, it avoids the problem of the state space explosion. Another advantage is that we do not need to limit the number of runs of the protocol to analyze it. We have proved the correctness of our algorithm, and have implemented it. The experimental results show that many examples of protocols of the literature, including Skeme [24], can be analyzed by our tool with very small resources: the analysis takes from less than 0.1 s for simple protocols to 23 s for the main mode of Skeme. It uses less than 2 Mb of memory in our tests.",
"title": ""
}
] |
[
{
"docid": "a61f2e71e0b68d8f4f79bfa33c989359",
"text": "Model-based testing relies on behavior models for the generation of model traces: input and expected output---test cases---for an implementation. We use the case study of an automotive network controller to assess different test suites in terms of error detection, model coverage, and implementation coverage. Some of these suites were generated automatically with and without models, purely at random, and with dedicated functional test selection criteria. Other suites were derived manually, with and without the model at hand. Both automatically and manually derived model-based test suites detected significantly more requirements errors than hand-crafted test suites that were directly derived from the requirements. The number of detected programming errors did not depend on the use of models. Automatically generated model-based test suites detected as many errors as hand-crafted model-based suites with the same number of tests. A sixfold increase in the number of model-based tests led to an 11% increase in detected errors.",
"title": ""
},
{
"docid": "b062222917050f13c3a17e8de53a6abe",
"text": "Exposed to traditional language learning strategies, students will gradually lose interest in and motivation to not only learn English, but also any language or culture. Hence, researchers are seeking technology-based learning strategies, such as digital game-mediated language learning, to motivate students and improve learning performance. This paper synthesizes the findings of empirical studies focused on the effectiveness of digital games in language education published within the last five years. Nine qualitative, quantitative, and mixed-method studies are collected and analyzed in this paper. The review found that recent empirical research was conducted primarily to examine the effectiveness by measuring language learning outcomes, motivation, and interactions. Weak proficiency was found in vocabulary retention, but strong proficiency was present in communicative skills such as speaking. Furthermore, in general, students reported that they are motivated to engage in language learning when digital games are involved; however, the motivation is also observed to be weak due to the design of the game and/or individual differences. The most effective method used to stimulate interaction language learning process seems to be digital games, as empirical studies demonstrate that it effectively promotes language education. However, significant work is still required to provide clear answers with respect to innovative and effective learning practice.",
"title": ""
},
{
"docid": "3f0f97dfa920d8abf795ba7f48904a3a",
"text": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.",
"title": ""
},
{
"docid": "d0c4997c611d8759805d33cf1ad9eef1",
"text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.",
"title": ""
},
{
"docid": "14636b427ecdab0b0bc73c1948eb8a08",
"text": "We review research related to the learning of complex motor skills with respect to principles developed on the basis of simple skill learning. Although some factors seem to have opposite effects on the learning of simple and of complex skills, other factors appear to be relevant mainly for the learning of more complex skills. We interpret these apparently contradictory findings as suggesting that situations with low processing demands benefit from practice conditions that increase the load and challenge the performer, whereas practice conditions that result in extremely high load should benefit from conditions that reduce the load to more manageable levels. The findings reviewed here call into question the generalizability of results from studies using simple laboratory tasks to the learning of complex motor skills. They also demonstrate the need to use more complex skills in motor-learning research in order to gain further insights into the learning process.",
"title": ""
},
{
"docid": "7f9640bc22241bb40154bedcfda33655",
"text": "This project aims to detect possible anomalies in the resource consumption of radio base stations within the 4G LTE Radio architecture. This has been done by analyzing the statistical data that each node generates every 15 minutes, in the form of \"performance maintenance counters\". In this thesis, we introduce methods that allow resources to be automatically monitored after software updates, in order to detect any anomalies in the consumption patterns of the different resources compared to the reference period before the update. Additionally, we also attempt to narrow down the origin of anomalies by pointing out parameters potentially linked to the issue.",
"title": ""
},
{
"docid": "e43a39af20f2e905d0bdb306235c622a",
"text": "This paper presents a fully integrated remotely powered and addressable radio frequency identification (RFID) transponder working at 2.45 GHz. The achieved operating range at 4 W effective isotropically radiated power (EIRP) base-station transmit power is 12 m. The integrated circuit (IC) is implemented in a 0.5 /spl mu/m silicon-on-sapphire technology. A state-of-the-art rectifier design achieving 37% of global efficiency is embedded to supply energy to the transponder. The necessary input power to operate the transponder is about 2.7 /spl mu/W. Reader to transponder communication is obtained using on-off keying (OOK) modulation while transponder to reader communication is ensured using the amplitude shift keying (ASK) backscattering modulation technique. Inductive matching between the antenna and the transponder IC is used to further optimize the operating range.",
"title": ""
},
{
"docid": "5109aa9328094af5e552ed1cab62f09a",
"text": "In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on the challenging 3D action dataset. We also tested our algorithm on the MSR Action3D dataset and our algorithm outperforms Li et al. [25] on most of the cases.",
"title": ""
},
{
"docid": "cebb70761a891fd1bce7402c10e7266c",
"text": "Abstract: A new approach for mobility, providing an alternative to the private passenger car, by offering the same flexibility but with much less nuisances, is emerging, based on fully automated electric vehicles. A fleet of such vehicles might be an important element in a novel individual, door-to-door, transportation system to the city of tomorrow. For fully automated operation, trajectory planning methods that produce smooth trajectories, with low associated accelerations and jerk, for providing passenger ́s comfort, are required. This paper addresses this problem proposing an approach that consists of introducing a velocity planning stage to generate adequate time sequences for usage in the interpolating curve planners. Moreover, the generated speed profile can be merged into the trajectory for usage in trajectory-tracking tasks like it is described in this paper, or it can be used separately (from the generated 2D curve) for usage in pathfollowing tasks. Three trajectory planning methods, aided by the speed profile planning, are analysed from the point of view of passengers' comfort, implementation easiness, and trajectory tracking.",
"title": ""
},
{
"docid": "5d8fc02f96206da7ccb112866951d4c7",
"text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.",
"title": ""
},
{
"docid": "36acc76d232f2f58fcb6b65a1d4027aa",
"text": "Surface measurements of the ear are needed to assess damage in patients with disfigurement or defects of the ears and face. Population norms are useful in calculating the amount of tissue needed to rebuild the ear to adequate size and natural position. Anthropometry proved useful in defining grades of severe, moderate, and mild microtia in 73 patients with various facial syndromes. The division into grades was based on the amount of tissue lost and the degree of asymmetry in the position of the ears. Within each grade the size and position of the ears varied greatly. In almost one-third, the nonoperated microtic ears were symmetrically located, promising the best aesthetic results with the least demanding surgical procedures. In slightly over one-third, the microtic ears were associated with marked horizontal and vertical asymmetries. In cases of horizontal and vertical dislocation exceeding 20 mm, surgical correction of the defective facial framework should precede the building up of a new ear. Data on growth and age of maturation of the ears in the normal population can be useful in choosing the optimal time for ear reconstruction.",
"title": ""
},
{
"docid": "2ae58def943d1ae34e1c62663900d64a",
"text": "This document outlines a method for implementing an eye tracking device as a method of electrical wheelchair control. Through the use of measured gaze points, it is possible to translate a desired movement into a physical one. This form of interface does not only provide a form of transportation for those with severe disability but also allow the user to get a sense of control back into their lives.",
"title": ""
},
{
"docid": "518e0713115bcaac6efc087d4107d95c",
"text": "This paper introduces a device and needed signal processing for high-resolution acoustic imaging in air. The device employs off the shelf audio hardware and linear frequency modulated (LFM) pulse waveform. The image formation is based on the principle of synthetic aperture. The proposed implementation uses inverse filtering method with a unique kernel function for each pixel and focuses a synthetic aperture with no approximations. The method is solid for both far-field and near-field and easily adaptable for different synthetic aperture formation geometries. The proposed imaging is demonstrated via an inverse synthetic aperture formation where the object rotation by a stepper motor provides the required change in aspect angle. Simulated and empirical results are presented. Measurements have been done using a conventional speaker and microphones in an ordinary room with near-field distance and strong static echoes present. The resulting high-resolution 2-D spatial distribution of the acoustic reflectivity provides valuable information for many applications such as object recognition.",
"title": ""
},
{
"docid": "01288eefbf2bc0e8c9dc4b6e0c6d70e9",
"text": "The latest discoveries on diseases and their diagnosis/treatment are mostly disseminated in the form of scientific publications. However, with the rapid growth of the biomedical literature and a high level of variation and ambiguity in disease names, the task of retrieving disease-related articles becomes increasingly challenging using the traditional keywordbased approach. An important first step for any disease-related information extraction task in the biomedical literature is the disease mention recognition task. However, despite the strong interest, there has not been enough work done on disease name identification, perhaps because of the difficulty in obtaining adequate corpora. Towards this aim, we created a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. Our corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories. When used as the gold standard data for a state-of-the-art machine-learning approach, significantly higher performance can be found on our corpus than the previous one. Such characteristics make this disease name corpus a valuable resource for mining disease-related information from biomedical text. The NCBI corpus is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Fe llows/Dogan/disease.html.",
"title": ""
},
{
"docid": "99f66f4ff6a8548a4cbdac39d5f54cc4",
"text": "Dissolution tests that can predict the in vivo performance of drug products are usually called biorelevant dissolution tests. Biorelevant dissolution testing can be used to guide formulation development, to identify food effects on the dissolution and bioavailability of orally administered drugs, and to identify solubility limitations and stability issues. To develop a biorelevant dissolution test for oral dosage forms, the physiological conditions in the gastrointestinal (GI) tract that can affect drug dissolution are taken into consideration according to the properties of the drug and dosage form. A variety of biorelevant methods in terms of media and hydrodynamics to simulate the contents and the conditions of the GI tract are presented. The ability of biorelevant dissolution methods to predict in vivo performance and generate successful in vitro–in vivo correlations (IVIVC) for oral formulations are also discussed through several studies.",
"title": ""
},
{
"docid": "cda5c6908b4f52728659f89bb082d030",
"text": "Until a few years ago the diagnosis of hair shaft disorders was based on light microscopy or scanning electron microscopy on plucked or cut samples of hair. Dermatoscopy is a new fast, noninvasive, and cost-efficient technique for easy in-office diagnosis of all hair shaft abnormalities including conditions such as pili trianguli and canaliculi that are not recognizable by examining hair shafts under the light microscope. It can also be used to identify disease limited to the eyebrows or eyelashes. Dermatoscopy allows for fast examination of the entire scalp and is very helpful to identify the affected hair shafts when the disease is focal.",
"title": ""
},
{
"docid": "561320dd717f1a444735dfa322dfbd31",
"text": "IEEE 802.11 based WLAN systems have gained interest to be used in the military and public authority environments, where the radio conditions can be harsh due to intentional jamming. The radio environment can be difficult also in commercial and civilian deployments since the unlicensed frequency bands are crowded. To study these problems, we built a test bed with a controlled signal path to measure the effects of different interfering signals to WLAN communications. We use continuous wideband noise jamming as the point of comparison, and focus on studying the effect of pulsed jamming and frequency sweep jamming. In addition, we consider also medium access control (MAC) interference. Based on the results, WLAN systems do not seem to be sensitive to the tested short noise jamming pulses. Under longer pulses, the effects are seen, and long data frames are more vulnerable to jamming than short ones. In fact, even a small amount of long frames in a data stream can ruin the performance of the whole link. Under frequency sweep jamming, slow sweeps with narrowband jamming signals can be quite harmful to WLAN communications. The results of MAC jamming show significant variation in performance between the different devices: The clear channel assessment (CCA) mechanism of some devices can be jammed very easily by using WLAN-like jamming signals. As a side product, the study also revealed some countermeasures against jamming.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "a920ed7775a73791946eb5610387bc23",
"text": "A limiting factor for photosynthetic organisms is their light-harvesting efficiency, that is the efficiency of their conversion of light energy to chemical energy. Small modifications or variations of chlorophylls allow photosynthetic organisms to harvest sunlight at different wavelengths. Oxygenic photosynthetic organisms usually utilize only the visible portion of the solar spectrum. The cyanobacterium Acaryochloris marina carries out oxygenic photosynthesis but contains mostly chlorophyll d and only traces of chlorophyll a. Chlorophyll d provides a potential selective advantage because it enables Acaryochloris to use infrared light (700-750 nm) that is not absorbed by chlorophyll a. Recently, an even more red-shifted chlorophyll termed chlorophyll f has been reported. Here, we discuss using modified chlorophylls to extend the spectral region of light that drives photosynthetic organisms.",
"title": ""
},
{
"docid": "fb8638c46ca5bb4a46b1556a2504416d",
"text": "In this paper we investigate how a VANET-based traffic information system can overcome the two key problems of strictly limited bandwidth and minimal initial deployment. First, we present a domain specific aggregation scheme in order to minimize the required overall bandwidth. Then we propose a genetic algorithm which is able to identify good positions for static roadside units in order to cope with the highly partitioned nature of a VANET in an early deployment stage. A tailored toolchain allows to optimize the placement with respect to an application-centric objective function, based on travel time savings. By means of simulation we assess the performance of the resulting traffic information system and the optimization strategy.",
"title": ""
}
] |
scidocsrr
|
890b3fd88530c8f03a6207188d6a32e7
|
Social LSTM: Human Trajectory Prediction in Crowded Spaces
|
[
{
"docid": "2ea9e1cebaf85f5129a2a5344e02975a",
"text": "We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.",
"title": ""
},
{
"docid": "d2b163b5a37419cf95d7450a05909008",
"text": "In this paper we develop a Bayesian nonparametric Inverse Reinforcement Learning technique for switched Markov Decision Processes (MDP). Similar to switched linear dynamical systems, switched MDP (sMDP) can be used to represent complex behaviors composed of temporal transitions between simpler behaviors each represented by a standard MDP. We use sticky Hierarchical Dirichlet Process as a nonparametric prior on the sMDP model space, and describe a Markov Chain Monte Carlo method to efficiently learn the posterior given the behavior data. We demonstrate the effectiveness of sMDP models for learning, prediction and classification of complex agent behaviors in a simulated surveillance scenario.",
"title": ""
}
] |
[
{
"docid": "9ee2081e014e2cde151e03a554e09c8e",
"text": "The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slices requests depending on their Service Level Agreements (SLA). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize the network utilization while meeting network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in terms of system utilization as well as a trade-off between conservative forecasting configurations versus more aggressive ones (higher gains, SLA risk).",
"title": ""
},
{
"docid": "b5e170645774a92375a0b83e5c6a9743",
"text": "Obesity is associated with a state of chronic, low-grade inflammation. Two manuscripts in this issue of the JCI (see the related articles beginning on pages 1796 and 1821) now report that obese adipose tissue is characterized by macrophage infiltration and that these macrophages are an important source of inflammation in this tissue. These studies prompt consideration of new models to include a major role for macrophages in the molecular changes that occur in adipose tissue in obesity.",
"title": ""
},
{
"docid": "73577e88b085e9e187328ce36116b761",
"text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.",
"title": ""
},
{
"docid": "abdd1406266d7290166eb16b8a5045a9",
"text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehouseman. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehouseman while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.",
"title": ""
},
{
"docid": "655f28b1eeed4c571237474c96ac84a0",
"text": "We present six cases of extra-axial lesions: three meningiomas [including one intraventricular and one cerebellopontine angle (CPA) meningioma], one dural metastasis, one CPA schwannoma and one choroid plexus papilloma which were chosen from a larger cohort of extra-axial tumors evaluated in our institution. Apart from conventional MR examinations, all the patients also underwent perfusion-weighted imaging (PWI) using dynamic susceptibility contrast method on a 1.5 T MR unit (contrast: 0.3 mmol/kg, rate 5 ml/s). Though the presented tumors showed very similar appearance on conventional MR images, they differed significantly in perfusion examinations. The article draws special attention to the usefulness of PWI in the differentiation of various extra-axial tumors and its contribution in reaching final correct diagnoses. Finding a dural lesion with low perfusion parameters strongly argues against the diagnosis of meningioma and should raise a suspicion of a dural metastasis. In cases of CPA tumors, a lesion with low relative cerebral blood volume values should be suspected to be schwannoma, allowing exclusion of meningioma to be made. In intraventricular tumors arising from choroid plexus, low perfusion parameters can exclude a diagnosis of meningioma. In our opinion, PWI as an easy and quick to perform functional technique should be incorporated into the MR protocol of all intracranial tumors including extra-axial neoplasms.",
"title": ""
},
{
"docid": "d99181a13ec133373f7fb40f98ea770d",
"text": "Fisting is an uncommon and potentially dangerous sexual practice. This is usually a homosexual activity, but can also be a heterosexual or an autoerotic practice. A systematic review of the forensic literature yielded 14 published studies from 8 countries between 1968 and 2016 that met the inclusion/exclusion criteria, illustrating that external anogenital (anal and/or genital) trauma due to fisting is observed in 22.2% and 88.8% (reported consensual and non-consensual intercourse, respectively) of the subjects, while internal injuries are observed in the totality of the patients. Establishing the reliability of the conclusions of these studies is difficult due to a lack of uniformity in methodology used to detect and define injuries. Taking this limit into account, the aim of this article is to give a description of the external and internal injuries subsequent to reported consensual and non-consensual fisting practice, and try to find a relation between this sexual practice, the morphology of the injuries, the correlation with the use of drugs, and the relationship with assailant, where possible. The findings reported in this paper could be useful, especially when concerns of sexual assault arise.",
"title": ""
},
{
"docid": "21c1493a2de747f9b5878648ee95d470",
"text": "In this summary of previous work, I argue that data becomes temporarily interesting by itself to some selfimproving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively more “beautiful.” Curiosity is the desire to create or discover more non-random, non-arbitrary, “truly novel,” regular data that allows for compression progress because its regularity was not yet known. This drive maximizes “interestingness,” the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and recent artificial systems.",
"title": ""
},
{
"docid": "868c3c6de73d53f54ca6090e9559007f",
"text": "To generate useful summarization of data while maintaining privacy of sensitive information is a challenging task, especially in the big data era. The privacy-preserving principal component algorithm proposed in [1] is a promising approach when a low rank data summarization is desired. However, the analysis in [1] is limited to the case of a single principal component, which makes use of bounds on the vector-valued Bingham distribution in the unit sphere. By exploring the non-commutative structure of data matrices in the full Stiefel manifold, we extend the analysis to an arbitrary number of principal components. Our results are obtained by analyzing the asymptotic behavior of the matrix-variate Bingham distribution using tools from random matrix theory.",
"title": ""
},
{
"docid": "d07416d917175d6bf809c4cefeeb44a3",
"text": "Extracting relevant information in multilingual context from massive amounts of unstructured, structured and semi-structured data is a challenging task. Various theories have been developed and applied to ease the access to multicultural and multilingual resources. This papers describes a methodology for the development of an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how it is possible to achieve the translation of Natural Language (NL) queries in any language by means of a knowledge-driven approach which allows to semi-automatically map natural language to formal language, simplifying and improving in this way the human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis and parsing. Thanks to its main characteristics, LG is independent from factors which are critical for other approaches, i.e. interaction type (voice or keyboard-based), length of sentences and propositions, type of vocabulary used and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which allows mapping both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.",
"title": ""
},
{
"docid": "2d93bec323bb5e534a1c6256bf324e76",
"text": "MRI has been increasingly used for detailed visualization of the fetus in utero as well as pregnancy structures. Yet, the familiarity of radiologists and clinicians with fetal MRI is still limited. This article provides a practical approach to fetal MR imaging. Fetal MRI is an interactive scanning of the moving fetus owed to the use of fast sequences. Single-shot fast spin-echo (SSFSE) T2-weighted imaging is a standard sequence. T1-weighted sequences are primarily used to demonstrate fat, calcification and hemorrhage. Balanced steady-state free-precession (SSFP), are beneficial in demonstrating fetal structures as the heart and vessels. Diffusion weighted imaging (DWI), MR spectroscopy (MRS), and diffusion tensor imaging (DTI) have potential applications in fetal imaging. Knowing the developing fetal MR anatomy is essential to detect abnormalities. MR evaluation of the developing fetal brain should include recognition of the multilayered-appearance of the cerebral parenchyma, knowledge of the timing of sulci appearance, myelination and changes in ventricular size. With advanced gestation, fetal organs as lungs and kidneys show significant changes in volume and T2-signal. Through a systematic approach, the normal anatomy of the developing fetus is shown to contrast with a wide spectrum of fetal disorders. The abnormalities displayed are graded in severity from simple common lesions to more complex rare cases. Complete fetal MRI is fulfilled by careful evaluation of the placenta, umbilical cord and amniotic cavity. Accurate interpretation of fetal MRI can provide valuable information that helps prenatal counseling, facilitate management decisions, guide therapy, and support research studies.",
"title": ""
},
{
"docid": "5cec29bc44da28160d99530d8813da47",
"text": "There are a variety of application areas in which there is a ne ed for simplifying complex polygonal surface models. These mo dels often have material properties such as colors, textures, an d surface normals. Our surface simplification algorithm, based on ite rative edge contraction and quadric error metrics, can rapidly pro duce high quality approximations of such models. We present a nat ural extension of our original error metric that can account for a wide range of vertex attributes. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representat io s",
"title": ""
},
{
"docid": "54a54f09781bc09dccaa6555535099a4",
"text": "Tax revenue has a very important role to fund the State's finances. In order for the optimal tax revenue, the tax authorities must perform tax supervision to the taxpayers optimally. By using the self-assessment taxation system that is taxpayers calculation, pay and report their own tax obligations added with the data of other parties will create a very large data. Therefore, the tax authorities are required to immediately know the taxpayer non-compliance for further audit. This research uses the classification algorithm C4.5, SVM (Support Vector Machine), KNN (K-Nearest Neighbor), Naive Bayes and MLP (Multilayer Perceptron) to classify the level of taxpayer compliance with four goals that are corporate taxpayers comply formally and materially required, corporate taxpayers comply formally required, corporate taxpayers comply materially required and corporate taxpayers not comply formally and materially required. The classification results of each algorithm are compared and the best algorithm chosen based on criteria F-Score, Accuracy and Time taken to build the model by using fuzzy TOPSIS method. The final result shows that C4.5 algorithm is the best algorithm to classify taxpayer compliance level compared to other algorithms.",
"title": ""
},
{
"docid": "e4c493697d9bece8daec6b2dd583e6bb",
"text": "High dimensionality of the feature space is one of the most important concerns in text classification problems due to processing time and accuracy considerations. Selection of distinctive features is therefore essential for text classification. This study proposes a novel filter based probabilistic feature selection method, namely distinguishing feature selector (DFS), for text classification. The proposed method is compared with well-known filter approaches including chi square, information gain, Gini index and deviation from Poisson distribution. The comparison is carried out for different datasets, classification algorithms, and success measures. Experimental results explicitly indicate that DFS offers a competitive performance with respect to the abovementioned approaches in terms of classification accuracy, dimension reduction rate and processing time.",
"title": ""
},
{
"docid": "15208617386aeb77f73ca7c2b7bb2656",
"text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "f50d0948319a4487b43b94bac09e5fab",
"text": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"title": ""
},
{
"docid": "2283e43c2bad5ac682fe185cb2b8a9c1",
"text": "As widely recognized in the literature, information technology (IT) investments have several special characteristics that make assessing their costs and benefits complicated. Here, we address the problem of evaluating a web content management system for both internal and external use. The investment is presently undergoing an evaluation process in a multinational company. We aim at making explicit the desired benefits and expected risks of the system investment. An evaluation hierarchy at general level is constructed. After this, a more detailed hierarchy is constructed to take into account the contextual issues. To catch the contextual issues key company representatives were interviewed. The investment alternatives are compared applying the principles of the Analytic Hierarchy Process (AHP). Due to the subjective and uncertain characteristics of the strategic IT investments a wide range of sensitivity analyses is performed.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "c2f620287606a2e233e2d3654c64c016",
"text": "Urban terrain is complex and they present a very challenging and difficult environment for simulating virtual forces as well as for rendering. The objective of this work is to research on Binary Space Partition technique (BSP) for modeling urban terrain environments. BSP is a method for recursively subdividing a space into convex sets by hyper-planes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree. Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.",
"title": ""
}
] |
scidocsrr
|
1a07755c5e5301f6e4313eb427481d39
|
GlyphLens: View-Dependent Occlusion Management in the Interactive Glyph Visualization
|
[
{
"docid": "116b5f129e780a99a1d78ec02a1fb092",
"text": "We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.",
"title": ""
},
{
"docid": "78b371e7df39a1ebbad64fdee7303573",
"text": "This state of the art report focuses on glyph-based visualization, a common form of visual design where a data set is depicted by a collection of visual objects referred to as glyphs. Its major strength is that patterns of multivariate data involving more than two attribute dimensions can often be more readily perceived in the context of a spatial relationship, whereas many techniques for spatial data such as direct volume rendering find difficult to depict with multivariate or multi-field data, and many techniques for non-spatial data such as parallel coordinates are less able to convey spatial relationships encoded in the data. This report fills several major gaps in the literature, drawing the link between the fundamental concepts in semiotics and the broad spectrum of glyph-based visualization, reviewing existing design guidelines and implementation techniques, and surveying the use of glyph-based visualization in many applications.",
"title": ""
}
] |
[
{
"docid": "c668dd96bbb4247ad73b178a7ba1f921",
"text": "Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and supportvector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotionrelated natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "42b705c2d8e6acbfe207dd86911b2494",
"text": "OBJECTIVES\nWe reported the interim findings of a randomized controlled trial (RCT) to examine the effects of a mind body physical exercise (Tai Chi) on cognitive function in Chinese subjects at risk of cognitive decline.\n\n\nSUBJECTS\n389 Chinese older persons with either a Clinical Dementia Rating (CDR 0.5) or amnestic-MCI participated in an exercise program. The exercise intervention lasted for 1 year; 171 subjects were trained with 24 forms simplified Tai Chi (Intervention, I) and 218 were trained with stretching and toning exercise (Control, C). The exercise comprised of advised exercise sessions of at least three times per week.\n\n\nRESULTS\nAt 5th months (2 months after completion of training), both I and C subjects showed an improvement in global cognitive function, delayed recall and subjective cognitive complaints (paired t-tests, p < 0.05). Improvements in visual spans and CDR sum of boxes scores were observed in I group (paired t-tests, p < 0.001). Three (2.2%) and 21(10.8%) subjects from the I and C groups progressed to dementia (Pearson chi square = 8.71, OR = 5.34, 95% CI 1.56-18.29). Logistic regression analysis controlled for baseline group differences in education and cognitive function suggested I group was associated with stable CDR (OR = 0.14, 95%CI = 0.03-0.71, p = 0.02).\n\n\nCONCLUSIONS\nOur interim findings showed that Chinese style mind body (Tai Chi) exercise may offer specific benefits to cognition, potential clinical interests should be further explored with longer observation period.",
"title": ""
},
{
"docid": "363c1ecd086043311f16b53b20778d51",
"text": "One recent development of cultural globalization emerges in the convergence of taste in media consumption within geo-cultural regions, such as Latin American telenovelas, South Asian Bollywood films and East Asian trendy dramas. Originating in Japan, the so-called trendy dramas (or idol dramas) have created a craze for Japanese commodities in its neighboring countries (Ko, 2004). Following this Japanese model, Korea has also developed as a stronghold of regional exports, ranging from TV programs, movies and pop music to food, fashion and tourism. The fondness for all things Japanese and Korean in East Asia has been vividly captured by such buzz phrases as Japan-mania (hari in Chinese) and the Korean wave (hallyu in Korean and hanliu in Chinese). These two phenomena underscore how popular culture helps polish the image of a nation and thus strengthens its economic competitiveness in the global market. Consequently, nationbranding has become incorporated into the project of nation-building in light of globalization. However, Japan’s cultural spread and Korea’s cultural expansion in East Asia are often analysed from angles that are polar opposites. Scholars suggest that Japan-mania is initiated by the ardent consumers of receiving countries (Nakano, 2002), while the Korea wave is facilitated by the Korean state in order to boost its culture industry (Ryoo, 2008). Such claims are legitimate but neglect the analogues of these two phenomena. This article examines the parallel paths through which Japan-mania and the Korean wave penetrate into people’s everyday practices in Taiwan – arguably one of the first countries to be swept by these two trends. My aim is to illuminate the processes in which nation-branding is not only promoted by a nation as an international marketing strategy, but also appropriated by a receiving country as a pattern of consumption. Three seemingly contradictory arguments explain why cultural products ‘sell’ across national borders: cultural transparency, cultural difference and hybridization. First, cultural exports targeting the global market are rarely culturally specific so that they allow worldwide audiences to ‘project [into them] indigenous values, beliefs, rites, and rituals’ Media, Culture & Society 33(1) 3 –18 © The Author(s) 2011 Reprints and permission: sagepub.co.uk/journalsPermissions.nav DOI: 10.1177/0163443710379670 mcs.sagepub.com",
"title": ""
},
{
"docid": "e818b0a38d17a77cc6cfdee2761f12c4",
"text": "In this paper, we present improved lane tracking using vehicle localization. Lane markers are detected using a bank of steerable filters, and lanes are tracked using Kalman filtering. On-road vehicle detection has been achieved using an active learning approach, and vehicles are tracked using a Condensation particle filter. While most state-of-the art lane tracking systems are not capable of performing in high-density traffic scenes, the proposed framework exploits robust vehicle tracking to allow for improved lane tracking in high density traffic. Experimental results demonstrate that lane tracking performance, robustness, and temporal response are significantly improved in the proposed framework, while also tracking vehicles, with minimal additional hardware requirements.",
"title": ""
},
{
"docid": "1b450f4ccaf148dad9d97f4c4b1b78dd",
"text": "Deep neural network models trained on large labeled datasets are the state-of-theart in a large variety of computer vision tasks. In many applications, however, labeled data is expensive to obtain or requires a time consuming manual annotation process. In contrast, unlabeled data is often abundant and available in large quantities. We present a principled framework to capitalize on unlabeled data by training deep generative models on both labeled and unlabeled data. We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets. We call our method Hybrid VAE (H-VAE) as it contains both the generative and the discriminative parts. We validate H-VAE on three large-scale datasets of different modalities: two face datasets: (MultiPIE, CelebA) and a hand pose dataset (NYU Hand Pose). Our qualitative visualizations further support improvements achieved by using partial observations.",
"title": ""
},
{
"docid": "1790c02ba32f15048da0f6f4d783aeda",
"text": "In this paper, resource allocation for energy efficient communication in orthogonal frequency division multiple access (OFDMA) downlink networks with large numbers of base station (BS) antennas is studied. Assuming perfect channel state information at the transmitter (CSIT), the resource allocation algorithm design is modeled as a non-convex optimization problem for maximizing the energy efficiency of data transmission (bit/Joule delivered to the users), where the circuit power consumption and a minimum required data rate are taken into consideration. Subsequently, by exploiting the properties of fractional programming, an efficient iterative resource allocation algorithm is proposed to solve the problem. In particular, the power allocation, subcarrier allocation, and antenna allocation policies for each iteration are derived. Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations and unveil the trade-off between energy efficiency and the number of antennas.",
"title": ""
},
{
"docid": "0b97ba6017a7f94ed34330555095f69a",
"text": "In response to stress, the brain activates several neuropeptide-secreting systems. This eventually leads to the release of adrenal corticosteroid hormones, which subsequently feed back on the brain and bind to two types of nuclear receptor that act as transcriptional regulators. By targeting many genes, corticosteroids function in a binary fashion, and serve as a master switch in the control of neuronal and network responses that underlie behavioural adaptation. In genetically predisposed individuals, an imbalance in this binary control mechanism can introduce a bias towards stress-related brain disease after adverse experiences. New candidate susceptibility genes that serve as markers for the prediction of vulnerable phenotypes are now being identified.",
"title": ""
},
{
"docid": "3371fe8778b813360debc384040c510e",
"text": "Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are \"on\" and \"off\" their medication in order to determine the statistical validity of the methodology. The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases.",
"title": ""
},
{
"docid": "c01e3b06294f9e84bcc9d493990c6149",
"text": "An integrated CMOS 60 GHz phased-array antenna module supporting symmetrical 32 TX/RX elements for wireless docking is described. Bidirectional architecture with shared blocks, mm-wave TR switch design with less than 1dB TX loss, and a full built in self test (BIST) circuits with 5deg and +/-1dB measurement accuracy of phase and power are presented. The RFIC size is 29mm2, consuming 1.2W/0.85W at TX and RX with a 29dBm EIRP at -19dB EVM and 10dB NF.",
"title": ""
},
{
"docid": "568317c1f18c476de5029d0a1e91438e",
"text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.",
"title": ""
},
{
"docid": "2f17160c9f01aa779b1745a57e34e1aa",
"text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.",
"title": ""
},
{
"docid": "9cd025b1ae9fde7bf30852377eb11057",
"text": "Lesch-Nyhan syndrome (LNS) is an X-linked recessive disorder resulting from a deficiency of the metabolic enzyme hypozanthine-guanine phosphoribosyltransferase (HPRT). This syndrome presents with abnormal metabolic and neurological manifestations including hyperuricemia, mental retardation*, spastic cerebral palsy (CP), dystonia, and self-mutilation. The mechanism behind the severe self-mutilating behavior exhibited by patients with LNS is unknown and remains one of the greatest obstacles in providing care to these patients. This report describes a 10-year-old male child with confirmed LNS who was treated for self-mutilation of his hands, tongue, and lips with repeated botulinum toxin A (BTX-A) injections into the bilateral masseters. Our findings suggest that treatment with BTX-A affects both the central and peripheral nervous systems, resulting in reduced self-abusive behavior in this patient.",
"title": ""
},
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
},
{
"docid": "5706b4955db81d04398fd6a64eb70c7c",
"text": "The number of applications (or apps) in the Android Market exceeded 450,000 in 2012 with more than 11 billion total downloads. The necessity to fix bugs and add new features leads to frequent app updates. For each update, a full new version of the app is downloaded to the user's smart phone; this generates significant traffic in the network. We propose to use delta encoding algorithms and to download only the difference between two versions of an app. We implement delta encoding for Android using the bsdiff and bspatch tools and evaluate its performance. We show that app update traffic can be reduced by about 50%, this can lead to significant cost and energy savings.",
"title": ""
},
{
"docid": "156b2c39337f4fe0847b49fa86dc094b",
"text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.",
"title": ""
},
{
"docid": "59c16bb2ec81dfb0e27ff47ccae0a169",
"text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.",
"title": ""
},
{
"docid": "b8dfe30c07f0caf46b3fc59406dbf017",
"text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.",
"title": ""
},
{
"docid": "72b15b373785198624438cdd7e187a79",
"text": "The technical debt metaphor is widely used to encapsulate numerous software quality problems. The metaphor is attractive to practitioners as it communicates to both technical and nontechnical audiences that if quality problems are not addressed, things may get worse. However, it is unclear whether there are practices that move this metaphor beyond a mere communication mechanism. Existing studies of technical debt have largely focused on code metrics and small surveys of developers. In this paper, we report on our survey of 1,831 participants, primarily software engineers and architects working in long-lived, software-intensive projects from three large organizations, and follow-up interviews of seven software engineers. We analyzed our data using both nonparametric statistics and qualitative text analysis. We found that architectural decisions are the most important source of technical debt. Furthermore, while respondents believe the metaphor is itself important for communication, existing tools are not currently helpful in managing the details. We use our results to motivate a technical debt timeline to focus management and tooling approaches.",
"title": ""
},
{
"docid": "e3104e5311dee57067540869f8036ba9",
"text": "Direct-touch interaction on mobile phones revolves around screens that compete for visual attention with users' real-world tasks and activities. This paper investigates the impact of these situational impairments on touch-screen interaction. We probe several design factors for touch-screen gestures, under various levels of environmental demands on attention, in comparison to the status-quo approach of soft buttons. We find that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone. In fact, the speed and accuracy of bezel gestures did not appear to be significantly affected by environment, and some gestures could be articulated eyes-free, with one hand. Bezel-initiated gestures offered the fastest performance, and mark-based gestures were the most accurate. Bezel-initiated marks therefore may offer a promising approach for mobile touch-screen interaction that is less demanding of the user's attention.",
"title": ""
},
{
"docid": "fc6214a4b20dba903a1085bd1b6122e0",
"text": "a r t i c l e i n f o Keywords: CRM technology use Marketing capability Customer-centric organizational culture Customer-centric management system Customer relationship management (CRM) technology has attracted significant attention from researchers and practitioners as a facilitator of organizational performance. Even though companies have made tremendous investments in CRM technology, empirical research offers inconsistent support that CRM technology enhances organizational performance. Given this equivocal effect and the increasing need for the generalization of CRM implementation research outside western context, the authors, using data from Korean companies, address the process concerning how CRM technology translates into business outcomes. The results highlight that marketing capability mediates the association between CRM technology use and performance. Moreover, a customer-centric organizational culture and management system facilitate CRM technology use. This study serves not only to clarify the mechanism between CRM technology use and organizational performance, but also to generalize the CRM results in the Korean context. In today's competitive business environment, the success of firm increasingly hinges on the ability to operate customer relationship management (CRM) that enables the development and implementation of more efficient and effective customer-focused strategies. Based on this belief, many companies have made enormous investment in CRM technology as a means to actualize CRM efficiently. Despite conceptual underpinnings of CRM technology and substantial financial implications , empirical research examining the CRM technology-performance link has met with equivocal results. Recent studies demonstrate that only 30% of the organizations introducing CRM technology achieved improvements in their organizational performance (Bull, 2003; Corner and Hinton, 2002). These conflicting findings hint at the potential influences of unexplored mediating or moderating factors and the need of further research on the mechanism by which CRM technology leads to improved business performance. Such inconsistent results of CRM technology implementation are not limited to western countries which most of previous CRM research originated from. Even though Korean companies have poured tremendous resources to CRM initiatives since 2000, they also cut down investment in CRM technology drastically due to disappointing returns (Knowledge Research Group, 2004). As a result, Korean companies are increasingly eager to corroborate the returns from investment in CRM. In the eastern culture like Korea that promotes holistic thinking focusing on the relationships between a focal object and overall context (Monga and John, 2007), CRM operates as a two-edged sword. Because eastern culture with holistic thinking tends to value existing relationship with firms or contact point persons …",
"title": ""
}
] |
scidocsrr
|
8af4b36e563711a6adf5ba30a27802e2
|
Simulation optimization: a review of algorithms and applications
|
[
{
"docid": "24d43934d9becd24584b07afb511868b",
"text": "Tabu search is a \"higher level\" heuristic procedure for solving optimization problems, designed to guide other methods (or their component processes) to escape the trap of local optimality. Tabu search has obtained optimal and near optimal solutions to a wide variety of classical and practical problems in applications ranging from scheduling to telecommunications and from character recognition to neural networks. It uses flexible structures memory (to permit search information to be exploited more thoroughly than by rigid memory systems or memoryless systems), conditions for strategically constraining and freeing the search process (embodied in tabu restrictions and aspiration criteria), and memory functions of varying time spans for intensifying and diversifying the search (reinforcing attributes historically found good and driving the search into new regions). Tabu search can be integrated with branch-and-bound and cutting plane procedures, and it has the ability to start with a simple implementation that can be upgraded over time to incorporate more advanced or specialized elements.",
"title": ""
}
] |
[
{
"docid": "af628819a5392543266668b94c579a96",
"text": "Elephantopus scaber is an ethnomedicinal plant used by the Zhuang people in Southwest China to treat headaches, colds, diarrhea, hepatitis, and bronchitis. A new δ -truxinate derivative, ethyl, methyl 3,4,3',4'-tetrahydroxy- δ -truxinate (1), was isolated from the ethyl acetate extract of the entire plant, along with 4 known compounds. The antioxidant activity of these 5 compounds was determined by ABTS radical scavenging assay. Compound 1 was also tested for its cytotoxicity effect against HepG2 by MTT assay (IC50 = 60 μ M), and its potential anti-inflammatory, antibiotic, and antitumor bioactivities were predicted using target fishing method software.",
"title": ""
},
{
"docid": "88eaf07c8ef59bad1ea9f29f83050149",
"text": "A monocular 3D object tracking system generally has only up-to-scale pose estimation results without any prior knowledge of the tracked object. In this paper, we propose a novel idea to recover the metric scale of an arbitrary dynamic object by optimizing the trajectory of the objects in the world frame, without motion assumptions. By introducing an additional constraint in the time domain, our monocular visual-inertial tracking system can obtain continuous six degree of freedom (6-DoF) pose estimation without scale ambiguity. Our method requires neither fixed multi-camera nor depth sensor settings for scale observability, instead, the IMU inside the monocular sensing suite provides scale information for both camera itself and the tracked object. We build the proposed system on top of our monocular visual-inertial system (VINS) to obtain accurate state estimation of the monocular camera in the world frame. The whole system consists of a 2D object tracker, an object region-based visual bundle adjustment (BA), VINS and a correlation analysis-based metric scale estimator. Experimental comparisons with ground truth demonstrate the tracking accuracy of our 3D tracking performance while a mobile augmented reality (AR) demo shows the feasibility of potential applications.",
"title": ""
},
{
"docid": "640b6328fe2a44d56fa9d7d2bf61798d",
"text": "This paper describes our participation in SemEval-2015 Task 12, and the opinion mining system sentiue. The general idea is that systems must determine the polarity of the sentiment expressed about a certain aspect of a target entity. For slot 1, entity and attribute category detection, our system applies a supervised machine learning classifier, for each label, followed by a selection based on the probability of the entity/attribute pair, on that domain. The target expression detection, for slot 2, is achieved by using a catalog of known targets for each entity type, complemented with named entity recognition. In the opinion sentiment slot, we used a 3 class polarity classifier, having BoW, lemmas, bigrams after verbs, presence of polarized terms, and punctuation based features. Working in unconstrained mode, our results for slot 1 were assessed with precision between 57% and 63%, and recall varying between 42% and 47%. In sentiment polarity, sentiue’s result accuracy was approximately 79%, reaching the best score in 2 of the 3 domains.",
"title": ""
},
{
"docid": "3aa50177ceaca48ed107682e20eb598f",
"text": "This paper presents the concept and operating principles of a flexible real-time long-term monitoring system for photovoltaic (PV) plants. Compared to traditional solutions which require dedicated hardware and/or specific data logging systems, the monitoring system we propose allows the user to monitor the grid-connected PV system using commercial of the shelf hardware devices and software programs such as LabVIEW and Weather Link software. The proposed system is built around wired/wireless devices and internet of things (IoT) concept. It provides customizable fast, reliable and secure monitoring tool suitable for deployment in PV systems management. The grid-connected PV monitoring system (GCPV-MS) is developed and installed at the University of Huddersfield, United Kingdom. The results obtained from this project indicate how IoT concept can be utilized in remote PV monitoring systems.",
"title": ""
},
{
"docid": "89c76729f6d1e53b35ecce548c5955af",
"text": "A high fidelity biomimetic hand actuated by 9 stepper motors packaged within forearm casing was manufactured for less than 350 USD; it has 18 mechanical degrees of freedom, is 38 cm long, weighs 2.2 kg. The hand model has 3D printed replicas of human bones and laser cut tendons and ligaments. The user intent is deduced from EEG and EMG signals, obtained by Neurosky and Myoware commercial sensors, respectively. Three distinct EEG patterns trigger pinch, hook, and point actions. EMG signals are used for finer motor control, e.g. strength of grip. A pilot test study on three subjects showed that EMG can actuate the hand with an 80% success rate, while EEG allows for a 68% success rate. The system proved its robustness at the 2017 Cambridge Science Festival, using EEG signals alone. Out of approximately 30 visitors the majority could generate a “peace” sign after 1 to 2 minutes.",
"title": ""
},
{
"docid": "fbb5a86992438d630585462f8626e13f",
"text": "As a basic task in computer vision, semantic segmentation can provide fundamental information for object detection and instance segmentation to help the artificial intelligence better understand real world. Since the proposal of fully convolutional neural network (FCNN), it has been widely used in semantic segmentation because of its high accuracy of pixel-wise classification as well as high precision of localization. In this paper, we apply several famous FCNN to brain tumor segmentation, making comparisons and adjusting network architectures to achieve better performance measured by metrics such as precision, recall, mean of intersection of union (mIoU) and dice score coefficient (DSC). The adjustments to the classic FCNN include adding more connections between convolutional layers, enlarging decoders after up sample layers and changing the way shallower layers’ information is reused. Besides the structure modification, we also propose a new classifier with a hierarchical dice loss. Inspired by the containing relationship between classes, the loss function converts multiple classification to multiple binary classification in order to counteract the negative effect caused by imbalance data set. Massive experiments have been done on the training set and testing set in order to assess our refined fully convolutional neural networks and new types of loss function. Competitive figures prove they are more effective than their predecessors.",
"title": ""
},
{
"docid": "e0632c0bb393eb567f8bcc21468742b2",
"text": "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.",
"title": ""
},
{
"docid": "80a61f27dab6a8f71a5c27437254778b",
"text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.",
"title": ""
},
{
"docid": "5ab17c802a11a7b7fb9d3190a7dbfa7b",
"text": "A CMOS active diode rectifier for wireless power transmission with proposed voltage-time-conversion (VTC) delay-locked loop (DLL) control suppresses reverse current by realizing zero-voltage switching (ZVS), regardless of AC input and process variations. The proposed circuit is implemented in a standard 0.18μm CMOS process using I/O MOSFETs, which corresponds to 0.35μm technology. The maximum power conversion efficiency of 78% is obtained at 231Ω load resistance.",
"title": ""
},
{
"docid": "f8def1217137641547921e3f52c0b4ae",
"text": "A 50-GHz charge pump phase-locked loop (PLL) utilizing an LC-oscillator-based injection-locked frequency divider (ILFD) was fabricated in 0.13-mum logic CMOS process. The PLL can be locked from 45.9 to 50.5 GHz and output power level is around -10 dBm. The operating frequency range is increased by tracking the self-oscillation frequencies of the voltage-controlled oscillator (VCO) and the frequency divider. The PLL including buffers consumes 57 mW from 1.5/0.8-V supplies. The phase noise at 50 kHz, 1 MHz, and 10 MHz offset from the carrier is -63.5, -72, and -99 dBc/Hz, respectively. The PLL also outputs second-order harmonics at frequencies between 91.8 and 101 GHz. The output frequency of 101 GHz is the highest for signals locked by a PLL fabricated using the silicon integrated circuits technology.",
"title": ""
},
{
"docid": "7ab232fbbda235c42e0dabb2b128ed59",
"text": "Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.",
"title": ""
},
{
"docid": "d91106191b1c437361e9eda93b68e902",
"text": "Although the history of thought reveals a number of holistic thinkers — Aristotle, Marx, Husserl among them— it was only in the 1950s that any version of holistic thinking became institutionalized. The kind of holistic thinking which then came to the fore, and was the concern of a newly created organization, was that which makes explicit use of the concept of ‘system’, and today it is ‘systems thinking’ in its various forms which would be taken to be the very paradigm of thinking holistically. In 1954, as recounted in Chapter 3 of Systems Thinking, Systems Practice, only one kind of systems thinking was on the table: the development of a mathematically expressed general theory of systems. It was supposed that this would provide a meta-level language and theory in which the problems of many different disciplines could be expressed and solved; and it was hoped that doing this would help to promote the unity of science. These were the aspirations of the pioneers, but looking back from 1999we can see that the project has not succeeded. The literature contains very little of the kind of outcomes anticipated by the founders of the Society for General Systems Research; and scholars in the many subject areas towhich a holistic approach is relevant have been understandably reluctant to see their pet subject as simply one more example of some broader ‘general system’!",
"title": ""
},
{
"docid": "88d00a5be341f523ecc2898e7dea26f3",
"text": "Spoken dialog systems help users achieve a task using natural language. Noisy speech recognition and ambiguity in natural language motivate statistical approaches that model distributions over the user’s goal at every step in the dialog. The task of tracking these distributions, termed Dialog State Tracking, is therefore an essential component of any spoken dialog system. In recent years, the Dialog State Tracking Challenges have provided a common testbed and evaluation framework for this task, as well as labeled dialog data. As a result, a variety of machine-learned methods have been successfully applied to Dialog State Tracking. This paper reviews the machine-learning techniques that have been adapted to Dialog State Tracking, and gives an overview of published evaluations. Discriminative machine-learned methods outperform generative and rule-based methods, the previous state-of-the-art.",
"title": ""
},
{
"docid": "210e9bc5f2312ca49438e6209ecac62e",
"text": "Image classification has become one of the main tasks in the field of computer vision technologies. In this context, a recent algorithm called CapsNet that implements an approach based on activity vectors and dynamic routing between capsules may overcome some of the limitations of the current state of the art artificial neural networks (ANN) classifiers, such as convolutional neural networks (CNN). In this paper, we evaluated the performance of the CapsNet algorithm in comparison with three well-known classifiers (Fisherfaces, LeNet, and ResNet). We tested the classification accuracy on four datasets with a different number of instances and classes, including images of faces, traffic signs, and everyday objects. The evaluation results show that even for simple architectures, training the CapsNet algorithm requires significant computational resources and its classification performance falls below the average accuracy values of the other three classifiers. However, we argue that CapsNet seems to be a promising new technique for image classification, and further experiments using more robust computation resources and refined CapsNet architectures may produce better outcomes.",
"title": ""
},
{
"docid": "a2cbec8144197125cc5530aa6755196f",
"text": "This paper provides a survey of the research done on optimization in dynamic environments over the past decade. We show an analysis of the most commonly used problems, methods and measures together with the newer approaches and trends, as well as their interrelations and common ideas. The survey is supported by a public web repository, located at http://www.dynamic-optimization. org where the collected bibliography is manually organized and tagged according to different categories.",
"title": ""
},
{
"docid": "574b01b68f72474d4916dc904f0040c8",
"text": "Recurrent Neural Networks (RNNs) are powerful models that a chieve exceptional performance on several pattern recognition problem s. However, the training of RNNs is a computationally difficult task owing to the wellknown “vanishing/exploding” gradient problem. Algorithms proposed for training RNNs either exploit no (or limited) curvature information and have chea p per-iteration complexity, or attempt to gain significant curvature informati on at the cost of increased per-iteration cost. The former set includes diagonally-sc aled first-order methods such as ADAGRAD and ADAM , while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we presentADAQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal sca ling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scali ng initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tas ks nd show that ADAQN is competitive with popular RNN training algorithms.",
"title": ""
},
{
"docid": "53cf85922865609c4a7591bd06679660",
"text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g. imageahility and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, level with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.",
"title": ""
},
{
"docid": "f83ca1c2732011e9a661f8cf9a0516ac",
"text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set. We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(Un),Un) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs the hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to ~{O}(n3), compared to O(n4) in the construction of Haitner et al.",
"title": ""
},
{
"docid": "23ba216f846eab3ff8c394ad29b507bf",
"text": "The emergence of large-scale freeform shapes in architecture poses big challenges to the fabrication of such structures. A key problem is the approximation of the design surface by a union of patches, so-called panels, that can be manufactured with a selected technology at reasonable cost, while meeting the design intent and achieving the desired aesthetic quality of panel layout and surface smoothness. The production of curved panels is mostly based on molds. Since the cost of mold fabrication often dominates the panel cost, there is strong incentive to use the same mold for multiple panels. We cast the major practical requirements for architectural surface paneling, including mold reuse, into a global optimization framework that interleaves discrete and continuous optimization steps to minimize production cost while meeting user-specified quality constraints. The search space for optimization is mainly generated through controlled deviation from the design surface and tolerances on positional and normal continuity between neighboring panels. A novel 6-dimensional metric space allows us to quickly compute approximate inter-panel distances, which dramatically improves the performance of the optimization and enables the handling of complex arrangements with thousands of panels. The practical relevance of our system is demonstrated by paneling solutions for real, cutting-edge architectural freeform design projects.",
"title": ""
},
{
"docid": "b836df8acd489acae10dbd8d58f6a8b3",
"text": "This paper presents a benchmark dataset for the task of inter-sentence relation extraction. The paper explains the distant supervision method followed for creating the dataset for inter-sentence relation extraction, involving relations previously used for standard intrasentence relation extraction task. The study evaluates baseline models such as bag-of-words and sequence based recurrent neural network models on the developed dataset and shows that recurrent neural network models are more useful for the task of intra-sentence relation extraction. Comparing the results of the present work on iner-sentence relation extraction with previous work on intra-sentence relation extraction, the study suggests the need for more sophisticated models to handle long-range information between entities across sentences.",
"title": ""
}
] |
scidocsrr
|
27868cdcf9701d4e128362e20b2f1dd8
|
Student Performance Prediction via Online Learning Behavior Analytics
|
[
{
"docid": "d3b6ba3e4b8e80c3c371226d7ae6d610",
"text": "Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also the entire educational arena. However, theoretical concepts and empirical evidence need to be generated within the fast evolving field of learning analytics. The purpose of the two reported cases studies is to identify alternative approaches to data analysis and to determine the validity and accuracy of a learning analytics framework and its corresponding student and learning profiles. The findings indicate that educational data for learning analytics is context specific and variables carry different meanings and can have different implications across educational institutions and area of studies. Benefits, concerns, and challenges of learning analytics are critically reflected, indicating that learning analytics frameworks need to be sensitive to idiosyncrasies of the educational institution and its stakeholders.",
"title": ""
}
] |
[
{
"docid": "6226fddb004d4e8d41b1167f61d3fcd7",
"text": "We build a neural conversation system using a deep LST Seq2Seq model with an attention mechanism applied on the decoder. We further improve our system by introducing beam search and re-ranking with a Mutual Information objective function method to search for relevant and coherent responses. We find that both models achieve reasonable results after being trained on a domain-specific dataset and are able to pick up contextual information specific to the dataset. The second model, in particular, has promise with addressing the ”I don’t know” problem and de-prioritizing over-generic responses.",
"title": ""
},
{
"docid": "54537c242bc89fbf15d9191be80c5073",
"text": "In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models that will be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on feature’s number of true groundings needs to be adjusted and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on expected errors of the estimated parameters, which allows us to lower-bound, among other things, the effective sample size of relational training data.",
"title": ""
},
{
"docid": "088df7d8d71c00f7129d5249844edbc5",
"text": "Intense multidisciplinary research has provided detailed knowledge of the molecular pathogenesis of Alzheimer disease (AD). This knowledge has been translated into new therapeutic strategies with putative disease-modifying effects. Several of the most promising approaches, such as amyloid-β immunotherapy and secretase inhibition, are now being tested in clinical trials. Disease-modifying treatments might be at their most effective when initiated very early in the course of AD, before amyloid plaques and neurodegeneration become too widespread. Thus, biomarkers are needed that can detect AD in the predementia phase or, ideally, in presymptomatic individuals. In this Review, we present the rationales behind and the diagnostic performances of the core cerebrospinal fluid (CSF) biomarkers for AD, namely total tau, phosphorylated tau and the 42 amino acid form of amyloid-β. These biomarkers reflect AD pathology, and are candidate markers for predicting future cognitive decline in healthy individuals and the progression to dementia in patients who are cognitively impaired. We also discuss emerging plasma and CSF biomarkers, and explore new proteomics-based strategies for identifying additional CSF markers. Furthermore, we outline the roles of CSF biomarkers in drug discovery and clinical trials, and provide perspectives on AD biomarker discovery and the validation of such markers for use in the clinic.",
"title": ""
},
{
"docid": "982af44d0c5fc3d0bddd2804cee77a04",
"text": "Coprime array offers a larger array aperture than uniform linear array with the same number of physical sensors, and has a better spatial resolution with increased degrees of freedom. However, when it comes to the problem of adaptive beamforming, the existing adaptive beamforming algorithms designed for the general array cannot take full advantage of coprime feature offered by the coprime array. In this paper, we propose a novel coprime array adaptive beamforming algorithm, where both robustness and efficiency are well balanced. Specifically, we first decompose the coprime array into a pair of sparse uniform linear subarrays and process their received signals separately. According to the property of coprime integers, the direction-of-arrival (DOA) can be uniquely estimated for each source by matching the super-resolution spatial spectra of the pair of sparse uniform linear subarrays. Further, a joint covariance matrix optimization problem is formulated to estimate the power of each source. The estimated DOAs and their corresponding power are utilized to reconstruct the interference-plus-noise covariance matrix and estimate the signal steering vector. Theoretical analyses are presented in terms of robustness and efficiency, and simulation results demonstrate the effectiveness of the proposed coprime array adaptive beamforming algorithm.",
"title": ""
},
{
"docid": "1ba6f0efdac239fa2cb32064bb743d29",
"text": "This paper presents a new method for determining efficient spatial distributions of police patrol areas. This method employs a traditional maximal covering formulation and an innovative backup covering formulation to provide alternative optimal solutions to police decision makers, and to address the lack of objective quantitative methods for police area design in the literature or in practice. This research demonstrates that operations research methods can be used in police decision making, presents a new backup coverage model that is appropriate for patrol area design, and encourages the integration of geographic information systems and optimal solution procedures. The models and methods are tested with the police geography of Dallas, TX. The optimal solutions are compared with the existing police geography, showing substantial improvement in number of incidents covered as well as total distance traveled.",
"title": ""
},
{
"docid": "26f957036ead7173f93ec16a57097a50",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "7c525afc11c41e0a8ca6e8c48bdec97c",
"text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.",
"title": ""
},
{
"docid": "ac078f78fcf0f675c21a337f8e3b6f5f",
"text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712",
"title": ""
},
{
"docid": "3a1f8a6934e45b50cbd691b5d28036b1",
"text": "Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.",
"title": ""
},
{
"docid": "38c1f6741d99ffc8ab2ab17b5b91e477",
"text": "This paper reviews recent advances in radar sensor design for low-power healthcare, indoor real-time positioning and other applications of IoT. Various radar front-end architectures and digital processing methods are proposed to improve the detection performance including detection accuracy, detection range and power consumption. While many of the reported designs were prototypes for concept verification, several integrated radar systems have been demonstrated with reliable measured results with demo systems. A performance comparison of latest radar chip designs has been provided to show their features of different architectures. With great development of IoT, short-range low-power radar sensors for healthcare and indoor positioning applications will attract more and more research interests in the near future.",
"title": ""
},
{
"docid": "88ffb30f1506bedaf7c1a3f43aca439e",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "c7631e1df773574e3640062c5fd55a01",
"text": "A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.",
"title": ""
},
{
"docid": "397f6c39825a5d8d256e0cc2fbba5d15",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "f291c66ebaa6b24d858103b59de792b7",
"text": "In this study, the authors investigated the hypothesis that women's sexual orientation and sexual responses in the laboratory correlate less highly than do men's because women respond primarily to the sexual activities performed by actors, whereas men respond primarily to the gender of the actors. The participants were 20 homosexual women, 27 heterosexual women, 17 homosexual men, and 27 heterosexual men. The videotaped stimuli included men and women engaging in same-sex intercourse, solitary masturbation, or nude exercise (no sexual activity); human male-female copulation; and animal (bonobo chimpanzee or Pan paniscus) copulation. Genital and subjective sexual arousal were continuously recorded. The genital responses of both sexes were weakest to nude exercise and strongest to intercourse. As predicted, however, actor gender was more important for men than for women, and the level of sexual activity was more important for women than for men. Consistent with this result, women responded genitally to bonobo copulation, whereas men did not. An unexpected result was that homosexual women responded more to nude female targets exercising and masturbating than to nude male targets, whereas heterosexual women responded about the same to both sexes at each activity level.",
"title": ""
},
{
"docid": "d04042c81f2c2f7f762025e6b2bd9ab8",
"text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.",
"title": ""
},
{
"docid": "d15ce9f62f88a07db6fa427fae61f26c",
"text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.",
"title": ""
},
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "de7b16961bb4aa2001a3d0859f68e4c6",
"text": "A new practical method is given for the self-calibration of a camera. In this method, at least three images are taken from the same point in space with different orientations of the camera and calibration is computed from an analysis of point matches between the images. The method requires no knowledge of the orientations of the camera. Calibration is based on the image correspondences only. This method differs fundamentally from previous results by Maybank and Faugeras on selfcalibration using the epipolar structure of image pairs. In the method of this paper, there is no epipolar structure since all images are taken from the same point in space. Since the images are all taken from the same point in space, determination of point matches is considerably easier than for images taken with a moving camera, since problems of occlusion or change of aspect or illumination do not occur. The calibration method is evaluated on several sets of synthetic and real image data.",
"title": ""
},
{
"docid": "c70e2174bc25577ccac51912be9d7233",
"text": "In this paper, the bridge shape of interior permanent magnet synchronous motor (IPMSM) is designed for integrated starter and generator (ISG) which is applied in hybrid electric vehicle (HEV). Mechanical stress of rotor core which is caused by centrifugal force is the main issue when IPMSM is operated at high speed. The bridge is thin area in rotor core where is mechanically weak point and the shape of bridge significantly affects leakage flux and electromagnetic performance. Therefore, bridge should be designed considering both mechanic and electromagnetic characteristics. In the design process, we firstly find a shape of bridge has low leakage flux and mechanical stress. Next, the calculation of mechanical stress and the electromagnetic characteristics are performed by finite element analysis (FEA). The mechanical stress in rotor core is not maximized in steady high speed but dynamical high momentum. Therefore, transient FEA is necessary to consider the dynamic speed changing in real speed profile for durability experiment. Before the verification test, fatigue characteristic is investigated by using S-N curve of rotor core material. Lastly, the burst test of rotor is performed and the deformation of rotor core is compared between prototype and designed model to verify the design method.",
"title": ""
},
{
"docid": "22c749b089f0bdd1a3296f59fa9cdfc5",
"text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and to increase production. The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and are classified in all possible classes using referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when captured test image is rotated, scaled and translated with respect to template image which makes the algorithm rotation, scale and translation in-variant. The novelty of the algorithm lies in its robustness to analyze a defect in its different possible appearance and severity. In addition to this, algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on the different PCB images and it shows that the proposed afgorithm is suitable for automatic visual inspection of PCBs.",
"title": ""
}
] |
scidocsrr
|
e26691763ff4bc685f34d288d09a8332
|
Light it up: using paper circuitry to enhance low-fidelity paper prototypes for children
|
[
{
"docid": "f641e0da7b9aaffe0fabd1a6b60a6c52",
"text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.",
"title": ""
},
{
"docid": "7efc1612114cde04a70733ce9e851ba9",
"text": "Low-fidelity paper prototyping has proven to be a useful technique for designing graphical user interfaces [1]. Wizard of Oz prototyping for other input modalities, such as speech, also has a long history [2]. Yet to surface are guidelines for low-fidelity prototyping of multimodal applications, those that use multiple and sometimes simultaneous combination of different input types. This paper describes our recent research in low fidelity, multimodal, paper prototyping and suggest guidelines to be used by future designers of multimodal applications.",
"title": ""
}
] |
[
{
"docid": "2a77d3750d35fd9fec52514739303812",
"text": "We present a framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable. We rst classify sources of uncertainty in motion planning into four categories, and argue that the problems addressed in this paper belong to a fundamental category that has received little attention. We treat the changing environment in a exible manner by combining traditional connguration space concepts with a Markov process that models the environment. For this context, we then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it could be confronted with. We allow the speciication of a desired performance criterion, such as time or distance, and determine a motion strategy that is optimal with respect to that criterion. We demonstrate the breadth of our framework by applying it to a variety of motion planning problems. Examples are computed for problems that involve a changing conng-uration space, hazardous regions and shelters, and processing of random service requests. To achieve this, we have exploited the powerful principle of optimality, which leads to a dynamic programming-based algorithm for determining optimal strategies. In addition, we present several extensions to the basic framework that incorporate additional concerns, such as sensing issues or changes in the geometry of the robot.",
"title": ""
},
{
"docid": "b0e81e112b9aa7ebf653243f00b21f23",
"text": "Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.",
"title": ""
},
{
"docid": "cc5f1304bb7564ec990cf61ada5c1c0f",
"text": "In the present study, the herbal preparation of Ophthacare brand eye drops was investigated for its anti-inflammatory, antioxidant and antimicrobial activity, using in vivo and in vitro experimental models. Ophthacare brand eye drops exhibited significant anti-inflammatory activity in turpentine liniment-induced ocular inflammation in rabbits. The preparation dose-dependently inhibited ferric chloride-induced lipid peroxidation in vitro and also showed significant antibacterial activity against Escherichia coli and Staphylococcus aureus and antifungal activity against Candida albicans. All these findings suggest that Ophthacare brand eye drops can be used in the treatment of various ophthalmic disorders.",
"title": ""
},
{
"docid": "da17a995148ffcb4e219bb3f56f5ce4a",
"text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.",
"title": ""
},
{
"docid": "82708e65107a0877a052ce81294f535c",
"text": "Abstract—Cyber exercises used to assess the preparedness of a community against cyber crises, technology failures and Critical Information Infrastructure (CII) incidents. The cyber exercises also called cyber crisis exercise or cyber drill, involved partnerships or collaboration of public and private agencies from several sectors. This study investigates Organisation Cyber Resilience (OCR) of participation sectors in cyber exercise called X Maya in Malaysia. This study used a principal based cyber resilience survey called CSuite Executive checklist developed by World Economic Forum in 2012. To ensure suitability of the survey to investigate the OCR, the reliability test was conducted on C-Suite Executive checklist items. The research further investigates the differences of OCR in ten Critical National Infrastructure Information (CNII) sectors participated in the cyber exercise. The One Way ANOVA test result showed a statistically significant difference of OCR among ten CNII sectors participated in the cyber exercise.",
"title": ""
},
{
"docid": "641a51f9a5af9fc9dba4be3d12829fd5",
"text": "In this paper, we present a novel SpaTial Attention Residue Network (STAR-Net) for recognising scene texts. The overall architecture of our STAR-Net is illustrated in fig. 1. Our STARNet emphasises the importance of representative image-based feature extraction from text regions by the spatial attention mechanism and the residue learning strategy. It is by far the deepest neural network proposed for scene text recognition.",
"title": ""
},
{
"docid": "625f1f11e627c570e26da9f41f89a28b",
"text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.",
"title": ""
},
{
"docid": "837d1ef60937df15afc320b2408ad7b0",
"text": "Zero-shot learning has tremendous application value in complex computer vision tasks, e.g. image classification, localization, image captioning, etc., for its capability of transferring knowledge from seen data to unseen data. Many recent proposed methods have shown that the formulation of a compatibility function and its generalization are crucial for the success of a zero-shot learning model. In this paper, we formulate a softmax-based compatibility function, and more importantly, propose a regularized empirical risk minimization objective to optimize the function parameter which leads to a better model generalization. In comparison to eight baseline models on four benchmark datasets, our model achieved the highest average ranking. Our model was effective even when the training set size was small and significantly outperforming an alternative state-of-the-art model in generalized zero-shot recognition tasks.",
"title": ""
},
{
"docid": "714863ecaa627df1fee3301dde140995",
"text": "Eye movement-based interaction offers the potential of easy, natural, and fast ways of interacting in virtual environments. However, there is little empirical evidence about the advantages or disadvantages of this approach. We developed a new interaction technique for eye movement interaction in a virtual environment and compared it to more conventional 3-D pointing. We conducted an experiment to compare performance of the two interaction types and to assess their impacts on spatial memory of subjects and to explore subjects' satisfaction with the two types of interactions. We found that the eye movement-based interaction was faster than pointing, especially for distant objects. However, subjects' ability to recall spatial information was weaker in the eye condition than the pointing one. Subjects reported equal satisfaction with both types of interactions, despite the technology limitations of current eye tracking equipment.",
"title": ""
},
{
"docid": "7a54331811a4a93df69365b6756e1d5f",
"text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.",
"title": ""
},
{
"docid": "2efb71ffb35bd05c7a124ffe8ad8e684",
"text": "We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.",
"title": ""
},
{
"docid": "45c8f409a5783067b6dce332500d5a88",
"text": "An online learning community enables learners to access up-to-date information via the Internet anytime–anywhere because of the ubiquity of the World Wide Web (WWW). Students can also interact with one another during the learning process. Hence, researchers want to determine whether such interaction produces learning synergy in an online learning community. In this paper, we take the Technology Acceptance Model as a foundation and extend the external variables as well as the Perceived Variables as our model and propose a number of hypotheses. A total of 436 Taiwanese senior high school students participated in this research, and the online learning community focused on learning English. The research results show that all the hypotheses are supported, which indicates that the extended variables can effectively predict whether users will adopt an online learning community. Finally, we discuss the implications of our findings for the future development of online English learning communities. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d798bc49068356495074f92b3bfe7a4b",
"text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.",
"title": ""
},
{
"docid": "5dcc5026f959b202240befbe56857ac4",
"text": "When a meta-analysis on results from experimental studies is conducted, differences in the study design must be taken into consideration. A method for combining results across independent-groups and repeated measures designs is described, and the conditions under which such an analysis is appropriate are discussed. Combining results across designs requires that (a) all effect sizes be transformed into a common metric, (b) effect sizes from each design estimate the same treatment effect, and (c) meta-analysis procedures use design-specific estimates of sampling variance to reflect the precision of the effect size estimates.",
"title": ""
},
{
"docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "7f3bccab6d6043d3dedc464b195df084",
"text": "This paper introduces a new probabilistic graphical model called gated Bayesian network (GBN). This model evolved from the need to represent processes that include several distinct phases. In essence, a GBN is a model that combines several Bayesian networks (BNs) in such a manner that they may be active or inactive during queries to the model. We use objects called gates to combine BNs, and to activate and deactivate them when predefined logical statements are satisfied. In this paper we also present an algorithm for semi-automatic learning of GBNs. We use the algorithm to learn GBNs that output buy and sell decisions for use in algorithmic trading systems. We show how the learnt GBNs can substantially lower risk towards invested capital, while they at the same time generate similar or better rewards, compared to the benchmark investment strategy buy-and-hold. We also explore some differences and similarities between GBNs and other related formalisms.",
"title": ""
},
{
"docid": "5b2bc42cf2a801dbed78b808fdba894b",
"text": "In this paper, we report the development of a contactless position sensor with thin and planar structures for both sensor and target. The target is designed to be a compact resonator with resonance near the operating frequency, which improves the signal strength and increases the sensing range. The sensor is composed of a source coil and a pair of symmetrically arranged detecting coils. With differential measurement technique, highly accurate edge detection can be realized. Experiment results show that the sensor operates at varying gap size between the target and the sensor, even when the target is at 30 mm away, and the achieved accuracy is within 2% of the size of the sensing coil.",
"title": ""
},
{
"docid": "9871a5673f042b0565c50295be188088",
"text": "Formal security analysis has proven to be a useful tool for tracking modifications in communication protocols in an automated manner, where full security analysis of revisions requires minimum efforts. In this paper, we formally analysed prominent IoT protocols and uncovered many critical challenges in practical IoT settings. We address these challenges by using formal symbolic modelling of such protocols under various adversaries and security goals. Furthermore, this paper extends formal analysis to cryptographic Denial-of-Service (DoS) attacks and demonstrates that a vast majority of IoT protocols are vulnerable to such resource exhaustion attacks. We present a cryptographic DoS attack countermeasure that can be generally used in many IoT protocols. Our study of prominent IoT protocols such as CoAP and MQTT shows the benefits of our approach.",
"title": ""
},
{
"docid": "36be150e997a1fb6b245e8c88688b1b8",
"text": "Restricted Boltzmann Machines (RBMs) are generative models which can learn useful representations from samples of a dataset in an unsupervised fashion. They have been widely employed as an unsupervised pre-training method in machine learning. RBMs have been modified to model time series in two main ways: The Temporal RBM stacks a number of RBMs laterally and introduces temporal dependencies between the hidden layer units; The Conditional RBM, on the other hand, considers past samples of the dataset as a conditional bias and learns a representation which takes these into account. Here we propose a new training method for both the TRBM and the CRBM, which enforces the dynamic structure of temporal datasets. We do so by treating the temporal models as denoising autoencoders, considering past frames of the dataset as corrupted versions of the present frame and minimizing the reconstruction error of the present data by the model. We call this approach Temporal Autoencoding. This leads to a significant improvement in the performance of both models in a filling-in-frames task across a number of datasets. The error reduction for motion capture data is 56% for the CRBM and 80% for the TRBM. Taking the posterior mean prediction instead of single samples further improves the model’s estimates, decreasing the error by as much as 91% for the CRBM on motion capture data. We also trained the model to perform forecasting on a large number of datasets and have found TA pretraining to consistently improve the performance of the forecasts. Furthermore, by looking at the prediction error across time, we can see that this improvement reflects a better representation of the dynamics of the data as opposed to a bias towards reconstructing the observed data on a short time scale. We believe this novel approach of mixing contrastive divergence and autoencoder training yields better models of temporal data, bridging the way towards more robust generative models of time series.",
"title": ""
},
{
"docid": "e4cfcd8bd577fc04480c62bbc6e94a41",
"text": "Background and Objective: Binaural interaction component has been seen to be effective in assessing the binaural interaction process in normal hearing individuals. However, there is a lack of literature regarding the effects of SNHL on the Binaural Interaction Component of ABR. Hence, it is necessary to study binaural interaction occurs at the brainstem when there is an associated hearing impairment. Methods: Three groups of participants in the age range of 30 to 55 years were taken for study i.e. one control group and two experimental groups (symmetrical and asymmetrical hearing loss). The binaural interaction component was determined by subtracting the binaurally evoked auditory potentials from the sum of the monaural auditory evoked potentials: BIC= [{left monaural + right monaural)-binaural}. The latency and amplitude of V peak was estimated for click evoked ABR for monaural and binaural recordings. Results: One way ANOVA revealed a significant difference for binaural interaction component in terms of latency between different groups. One-way ANOVA also showed no significant difference seen between the three different groups in terms of amplitude. Conclusion: The binaural interaction component of auditory brainstem response can be used to evaluate the binaural interaction in symmetrical and asymmetrical hearing loss. This will be helpful to circumvent the effect of peripheral hearing loss in binaural processing of the auditory system. Additionally the test does not require any behavioral cooperation from the client, hence can be administered easily.",
"title": ""
}
] |
scidocsrr
|
1e901be1a2932799e93bdc1415d0e267
|
Text-based emotion classification using emotion cause extraction
|
[
{
"docid": "3bd77be05377f7c5bede15c276e3f856",
"text": "In this paper, we propose a data-oriented method for inferring the emotion of a speaker conversing with a dialog system from the semantic content of an utterance. We first fully automatically obtain a huge collection of emotion-provoking event instances from the Web. With Japanese chosen as a target language, about 1.3 million emotion provoking event instances are extracted using an emotion lexicon and lexical patterns. We then decompose the emotion classification task into two sub-steps: sentiment polarity classification (coarsegrained emotion classification), and emotion classification (fine-grained emotion classification). For each subtask, the collection of emotion-proviking event instances is used as labelled examples to train a classifier. The results of our experiments indicate that our method significantly outperforms the baseline method. We also find that compared with the singlestep model, which applies the emotion classifier directly to inputs, our two-step model significantly reduces sentiment polarity errors, which are considered fatal errors in real dialog applications.",
"title": ""
}
] |
[
{
"docid": "c7192026f1ec61327c3a284fe04c6116",
"text": "The genital examination is not a routine part of health maintenance assessment in prepubertal and pubertal girls. However, evaluation of minors for suspected sexual abuse has been addressed extensively in the last two decades. In spite of this, normal anatomic variations and developmental changes are not fully investigated. This paper reviews current knowledge about the hymen, with a focus on puberty and adolescence. More is known about the external genitals of prepubertal children than of adolescent girls. No longitudinal studies have been performed among girls older than age 3. Tanner staging does not include detailed genital development. A variety of terms have been used to describe the configuration and/or distortion of the hymen: attenuation, clefts, tears and transections, bumps and notches. No studies have been published on the normal variations of the width of the hymenal rim, although an attenuated and/or narrow rim is categorized as consistent with penetrative sexual abuse according to an international consensus statement. Critiques of the literature on the hymen have been published by experts on forensic medicine, emphasizing the fact that the normal hymenal appearance in adolescents still is not well documented. Few studies on hymenal configuration in nonabused adolescent girls have been performed, including girls with and without experience of consensual vaginal intercourse and use of tampons. Longitudinal investigations are required for a better knowledge of female genital development during puberty, with a special focus on vulvar and hymenal anatomy.",
"title": ""
},
{
"docid": "efa566cdd4f5fa3cb12a775126377cb5",
"text": "This paper deals with the electromagnetic emissions of integrated circuits. In particular, four measurement techniques to evaluate integrated circuit conducted emissions are described in detail and they are employed for the measurement of the power supply conducted emission delivered by a simple integrated circuit composed of six synchronous switching drivers. Experimental results obtained by employing such measurement methods are presented and the influence of each test setup on the measured quantities is discussed.",
"title": ""
},
{
"docid": "2b1caf45164e7453453eaaf006dc3827",
"text": "This paper presents an estimation of the longitudinal movement of an aircraft using the STM32 microcontroller F1 Family. The focus of this paper is on developing code to implement the famous Luenberger Observer and using the different devices existing in STM32 F1 micro-controllers. The suggested Luenberger observer was achieved using the Keil development tools designed for devices microcontrollers based on the ARM processor and labor with C / C ++ language. The Characteristics that show variations in time of the state variables and step responses prove that the identification of the longitudinal movement of an aircraft were performed with minor errors in the right conditions. These results lead to easily develop predictive algorithms for programmable hardware in the industry.",
"title": ""
},
{
"docid": "5faa1d3acdd057069fb1dab75d7b0803",
"text": "The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. We present new experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14% gain over state-of-the-art.",
"title": ""
},
{
"docid": "9c7f9ff55b02bd53e94df004dcc615b9",
"text": "Support Vector Machines (SVM) is among the most popular classification techniques in machine learning, hence designing fast primal SVM algorithms for large-scale datasets is a hot topic in recent years. This paper presents a new L2norm regularized primal SVM solver using Augmented Lagrange Multipliers, with linear computational cost for Lp-norm loss functions. The most computationally intensive steps (that determine the algorithmic complexity) of the proposed algorithm is purely and simply matrix-byvector multiplication, which can be easily parallelized on a multi-core server for parallel computing. We implement and integrate our algorithm into the interfaces and framework of the well-known LibLinear software toolbox. Experiments show that our algorithm is with stable performance and on average faster than the stateof-the-art solvers such as SVM perf , Pegasos and the LibLinear that integrates the TRON, PCD and DCD algorithms.",
"title": ""
},
{
"docid": "20df093f748b038fcadebcd32c179a4e",
"text": "This research provided a systematic literature review of theoretical models on interaction and collaborations regarding Information system (IS) and Information Technology (IT). This paper conducted an review of studies dedicated to (IS & IT) on the basis of certain dimensions namely, research theories, review of constructivist theories, definitions of constructivism, social constructivism, theoretical of constructivism, active collaborative learning theory, technology acceptance model (TAM), theory of reasoned action, technology acceptance model and Its extensions, and finally research models and frameworks. The discussion of this research obtained revealed that the interest on the topic has shown an increasing trend over recent years that it has ultimately become a well-known topic for academic research in the future via theories use. From review of theoretical models and related theories we recommend to use constructivism, active collaborative learning theory with (TAM) to measurement performance and satisfaction with social media use as the mediator. However, to boost and enhance the IT continuance intention, it is important that future studies apply considerable use of theoretical and methodological approaches like the qualitative methods to examine the IT continuance intention.",
"title": ""
},
{
"docid": "8c50fc49815e406e732f282caba67c7b",
"text": "This paper presents GOM, a language for describing abstract syntax trees and generating a Java implementation for those trees. GOM includes features allowing to specify and modify the interface of the data structure. These features provide in particular the capability to maintain the internal representation of data in canonical form with respect to a rewrite system. This explicitly guarantees that the client program only manipulates normal forms for this rewrite system, a feature which is only implicitly used in many implementations.",
"title": ""
},
{
"docid": "9d8debb624d5981e16d39bae662449cc",
"text": "The use of reinforcement and rewards is known to enhance memory retention. However, the impact of reinforcement on higher-order forms of memory processing, such as integration and generalization, has not been directly manipulated in previous studies. Furthermore, there is evidence that sleep enhances the integration and generalization of memory, but these studies have only used reinforcement learning paradigms and have not examined whether reinforcement impacts or is critical for memory integration and generalization during sleep. Thus, the aims of the current study were to examine: (1) whether reinforcement during learning impacts the integration and generalization of memory; and (2) whether sleep and reinforcement interact to enhance memory integration and generalization. We investigated these questions using a transitive inference (TI) task, which is thought to require the integration and generalization of disparate relational memories in order to make novel inferences. To examine whether reinforcement influences or is required for the formation of inferences, we compared performance using a reinforcement or an observation based TI task. We examined the impact of sleep by comparing performance after a 12-h delay containing either wake or sleep. Our results showed that: (1) explicit reinforcement during learning is required to make transitive inferences and that sleep further enhances this effect; (2) sleep does not make up for the inability to make inferences when reinforcement does not occur during learning. These data expand upon previous findings and suggest intriguing possibilities for the mechanisms involved in sleep-dependent memory transformation.",
"title": ""
},
{
"docid": "9fc7f8ef20cf9c15f9d2d2ce5661c865",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
},
{
"docid": "7e682f98ee6323cd257fda07504cba20",
"text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods",
"title": ""
},
{
"docid": "c4d610eb523833a2ded2b0090d6c0337",
"text": "In this paper, I argue that animal domestication, speciesism, and other modern human-animal interactions in North America are possible because of and through the erasure of Indigenous bodies and the emptying of Indigenous lands for settler-colonial expansion. That is, we cannot address animal oppression or talk about animal liberation without naming and subsequently dismantling settler colonialism and white supremacy as political machinations that require the simultaneous exploitation and/or erasure of animal and Indigenous bodies. I begin by re-framing animality as a politics of space to suggest that animal bodies are made intelligible in the settler imagination on stolen, colonized, and re-settled Indigenous lands. Thinking through Andrea Smith’s logics of white supremacy, I then re-center anthropocentrism as a racialized and speciesist site of settler coloniality to re-orient decolonial thought toward animality. To critique the ways in which Indigenous bodies and epistemologies are at stake in neoliberal re-figurings of animals as settler citizens, I reject the colonial politics of recognition developed in Sue Donaldson and Will Kymlicka’s recent monograph, Zoopolis: A Political Theory of Animal Rights (Oxford University Press 2011) because it militarizes settler-colonial infrastructures of subjecthood and governmentality. I then propose a decolonized animal ethic that finds legitimacy in Indigenous cosmologies to argue that decolonization can only be reified through a totalizing disruption of those power apparatuses (i.e., settler colonialism, anthropocentrism, white supremacy, and neoliberal pluralism) that lend the settler state sovereignty, normalcy, and futurity insofar as animality is a settler-colonial particularity.",
"title": ""
},
{
"docid": "db866d876dddb61c4da3ff554e5b6643",
"text": "Distributed stream processing systems need to support stateful processing, recover quickly from failures to resume such processing, and reprocess an entire data stream quickly. We present Apache Samza, a distributed system for stateful and fault-tolerant stream processing. Samza utilizes a partitioned local state along with a low-overhead background changelog mechanism, allowing it to scale to massive state sizes (hundreds of TB) per application. Recovery from failures is sped up by re-scheduling based on Host Affinity. In addition to processing infinite streams of events, Samza supports processing a finite dataset as a stream, from either a streaming source (e.g., Kafka), a database snapshot (e.g., Databus), or a file system (e.g. HDFS), without having to change the application code (unlike the popular Lambdabased architectures which necessitate maintenance of separate code bases for batch and stream path processing). Samza is currently in use at LinkedIn by hundreds of production applications with more than 10, 000 containers. Samza is an open-source Apache project adopted by many top-tier companies (e.g., LinkedIn, Uber, Netflix, TripAdvisor, etc.). Our experiments show that Samza: a) handles state efficiently, improving latency and throughput by more than 100× compared to using a remote storage; b) provides recovery time independent of state size; c) scales performance linearly with number of containers; and d) supports reprocessing of the data stream quickly and with minimal interference on real-time traffic.",
"title": ""
},
{
"docid": "2019018e22e8ebc4c1546c87f36e31e2",
"text": "Many alternative modulation schemes have been investigated to replace OFDM for radio systems. But they all have some weak points. In this paper, we present a novel modulation scheme, which minimizes the predecessors' drawbacks, while still keeping their advantages.",
"title": ""
},
{
"docid": "0d91d17b5b4e8d71777ec28fb5781d64",
"text": "Before we apply nonlinear techniques, e.g. those inspired by chaos theory, to dynamical phenomena occurring in nature, it is necessary to first ask if the use of such advanced techniques is justified by the data. While many processes in nature seem very unlikely a priori to be linear, the possible nonlinear nature might not be evident in specific aspects of their dynamics. The method of surrogate data has become a very popular tool to address such a question. However, while it was meant to provide a statistically rigorous, foolproof framework, some limitations and caveats have shown up in its practical use. In this paper, recent efforts to understand the caveats, avoid the pitfalls, and to overcome some of the limitations, are reviewed and augmented by new material. In particular, we will discuss specific as well as more general approaches to constrained randomisation, providing a full range of examples. New algorithms will be introduced for unevenly sampled and multivariate data and for surrogate spike trains. The main limitation, which lies in the interpretability of the test results, will be illustrated through instructive case studies. We will also discuss some implementational aspects of the realisation of these methods in the TISEAN software package. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "c1b8beec6f2cb42b5a784630512525f3",
"text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.",
"title": ""
},
{
"docid": "c9aec633deebe159fa01c7af626d7ae4",
"text": "Many tasks in NLP stand to benefit from robust measures of semantic similarity for units above the level of individual words. Rich semantic resources such as WordNet provide local semantic information at the lexical level. However, effectively combining this information to compute scores for phrases or sentences is an open problem. Our algorithm aggregates local relatedness information via a random walk over a graph constructed from an underlying lexical resource. The stationary distribution of the graph walk forms a “semantic signature” that can be compared to another such distribution to get a relatedness score for texts. On a paraphrase recognition task, the algorithm achieves an 18.5% relative reduction in error rate over a vector-space baseline. We also show that the graph walk similarity between texts has complementary value as a feature for recognizing textual entailment, improving on a competitive baseline system.",
"title": ""
},
{
"docid": "43dad2821c9b8663bc26d86b71362ef5",
"text": "Measures of graph similarity have a broad array of applications, including comparing chemical structures, navigating complex networks like the World Wide Web, and more recently, analyzing different kinds of biological data. This thesis surveys several different notions of similarity, then focuses on an interesting class of iterative algorithms that use the structural similarity of local neighborhoods to derive pairwise similarity scores between graph elements. We have developed a new similarity measure that uses a linear update to generate both node and edge similarity scores and has desirable convergence properties. This thesis also explores the application of our similarity measure to graph matching. We attempt to correctly position a subgraph GB within a graph GA using a maximum weight matching algorithm applied to the similarity scores between GA and GB. Significant performance improvements are observed when the topological information provided by the similarity measure is combined with additional information about the attributes of the graph elements and their local neighborhoods. Matching results are presented for subgraph matching within randomly-generated graphs; an appendix briefly discusses matching applications in the yeast interactome, a graph representing protein-protein interactions within yeast. Thesis Supervisor: George Verghese Title: Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "0cbd1230babfc3c426f339801c607d22",
"text": "Despite half a century of fuzzy sets and fuzzy logic progress, as fuzzy sets address complex and uncertain information through the lens of human knowledge and subjectivity, more progress is needed in the semantics of fuzzy sets and in exploring the multi-modal aspect of fuzzy logic due to the different cognitive, emotional and behavioral angles of assessing truth. We lay here the foundations of a postmodern fuzzy set and fuzzy logic theory addressing these issues by deconstructing fuzzy truth values and fuzzy set membership functions to re-capture the human knowledge and subjectivity structure in membership function evaluations. We formulate a fractal multi-modal logic of Kabbalah which integrates the cognitive, emotional and behavioral levels of humanistic systems into epistemic and modal, deontic and doxastic and dynamic multi-modal logic. This is done by creating a fractal multi-modal Kabbalah possible worlds semantic frame of Kripke model type. The Kabbalah possible worlds semantic frame integrates together both the multi-modal logic aspects and their Kripke possible worlds model. We will not focus here on modal operators and axiom sets. We constructively define a fractal multi-modal Kabbalistic L-fuzzy set as the central concept of the postmodern fuzzy set theory based on Kabbalah logic and semantics.",
"title": ""
},
{
"docid": "9e2db834da4eb5d226afec4f8dd58c4c",
"text": "This paper introduces a new hand gesture recognition technique to recognize Arabic sign language alphabet and converts it into voice correspondences to enable Arabian deaf people to interact with normal people. The proposed technique captures a color image for the hand gesture and converts it into YCbCr color space that provides an efficient and accurate way to extract skin regions from colored images under various illumination changes. Prewitt edge detector is used to extract the edges of the segmented hand gesture. Principal Component Analysis algorithm is applied to the extracted edges to form the predefined feature vectors for signs and gestures library. The Euclidean distance is used to measure the similarity between the signs feature vectors. The nearest sign is selected and the corresponding sound clip is played. The proposed technique is used to recognize Arabic sign language alphabets and the most common Arabic gestures. Specifically, we applied the technique to more than 150 signs and gestures with accuracy near to 97% at real time test for three different signers. The detailed of the proposed technique and the experimental results are discussed in this paper.",
"title": ""
},
{
"docid": "1d14a2ff9e8dd162ee2ea80480527eef",
"text": "Feature learning on point clouds has shown great promise, with the introduction of effective and generalizable deep learning frameworks such as pointnet++. Thus far, however, point features have been abstracted in an independent and isolated manner, ignoring the relative layout of neighboring points as well as their features. In the present article, we propose to overcome this limitation by using spectral graph convolution on a local graph, combined with a novel graph pooling strategy. In our approach, graph convolution is carried out on a nearest neighbor graph constructed from a point’s neighborhood, such that features are jointly learned. We replace the standard max pooling step with a recursive clustering and pooling strategy, devised to aggregate information from within clusters of nodes that are close to one another in their spectral coordinates, leading to richer overall feature descriptors. Through extensive experiments on diverse datasets, we show a consistent demonstrable advantage for the tasks of both point set classification and segmentation.",
"title": ""
}
] |
scidocsrr
|
2255e1fb003f3cc7b3e6c8030276c8f9
|
Non-contact video-based pulse rate measurement on a mobile service robot
|
[
{
"docid": "2531d8d05d262c544a25dbffb7b43d67",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
}
] |
[
{
"docid": "44672e9dc60639488800ad4ae952f272",
"text": "The GPS technology and new forms of urban geography have changed the paradigm for mobile services. As such, the abundant availability of GPS traces has enabled new ways of doing taxi business. Indeed, recent efforts have been made on developing mobile recommender systems for taxi drivers using Taxi GPS traces. These systems can recommend a sequence of pick-up points for the purpose of maximizing the probability of identifying a customer with the shortest driving distance. However, in the real world, the income of taxi drivers is strongly correlated with the effective driving hours. In other words, it is more critical for taxi drivers to know the actual driving routes to minimize the driving time before finding a customer. To this end, in this paper, we propose to develop a cost-effective recommender system for taxi drivers. The design goal is to maximize their profits when following the recommended routes for finding passengers. Specifically, we first design a net profit objective function for evaluating the potential profits of the driving routes. Then, we develop a graph representation of road networks by mining the historical taxi GPS traces and provide a Brute-Force strategy to generate optimal driving route for recommendation. However, a critical challenge along this line is the high computational cost of the graph based approach. Therefore, we develop a novel recursion strategy based on the special form of the net profit function for searching optimal candidate routes efficiently. Particularly, instead of recommending a sequence of pick-up points and letting the driver decide how to get to those points, our recommender system is capable of providing an entire driving route, and the drivers are able to find a customer for the largest potential profit by following the recommendations. This makes our recommender system more practical and profitable than other existing recommender systems. Finally, we carry out extensive experiments on a real-world data set collected from the San Francisco Bay area and the experimental results clearly validate the effectiveness of the proposed recommender system.",
"title": ""
},
{
"docid": "6224f4f3541e9cd340498e92a380ad3f",
"text": "A personal story: From philosophy to software.",
"title": ""
},
{
"docid": "da0de29348f5414f33bacad850fa79d1",
"text": "This paper presents a construction algorithm for the short block irregular low-density parity-check (LDPC) codes. By applying a magic square theorem as a part of the matrix construction, a newly developed algorithm, the so-called Magic Square Based Algorithm (MSBA), is obtained. The modified array codes are focused on in this study since the reduction of 1s can lead to simple encoding and decoding schemes. Simulation results based on AWGN channels show that with the code rate of 0.8 and SNR 5 dB, the BER of 10 can be obtained whilst the number of decoding iteration is relatively low.",
"title": ""
},
{
"docid": "d8272965f75b55bafb29c0eb4892f813",
"text": "One expensive step when defining crowdsourcing tasks is to define the examples and control questions for instructing the crowd workers. In this paper, we introduce a self-training strategy for crowdsourcing. The main idea is to use an automatic classifier, trained on weakly supervised data, to select examples associated with high confidence. These are used by our automatic agent to explain the task to crowd workers with a question answering approach. We compared our relation extraction system trained with data annotated (i) with distant supervision and (ii) by workers instructed with our approach. The analysis shows that our method relatively improves the relation extraction system by about 11% in F1.",
"title": ""
},
{
"docid": "2841406ba32b534bb85fb970f2a00e58",
"text": "We present WHATSUP, a collaborative filtering system for disseminating news items in a large-scale dynamic setting with no central authority. WHATSUP constructs an implicit social network based on user profiles that express the opinions of users about the news items they receive (like-dislike). Users with similar tastes are clustered using a similarity metric reflecting long-standing and emerging (dis)interests. News items are disseminated through a novel heterogeneous gossip protocol that (1) biases the orientation of its targets towards those with similar interests, and (2) amplifies dissemination based on the level of interest in every news item. We report on an extensive evaluation of WHATSUP through (a) simulations, (b) a ModelNet emulation on a cluster, and (c) a PlanetLab deployment based on real datasets. We show that WHATSUP outperforms various alternatives in terms of accurate and complete delivery of relevant news items while preserving the fundamental advantages of standard gossip: namely, simplicity of deployment and robustness.",
"title": ""
},
{
"docid": "ecc31d1d7616e014a3a032d14e149e9b",
"text": "It has been proposed that sexual stimuli will be processed in a comparable manner to other evolutionarily meaningful stimuli (such as spiders or snakes) and therefore elicit an attentional bias and more attentional engagement (Spiering and Everaerd, In E. Janssen (Ed.), The psychophysiology of sex (pp. 166-183). Bloomington: Indiana University Press, 2007). To investigate early and late attentional processes while looking at sexual stimuli, heterosexual men (n = 12) viewed pairs of sexually preferred (images of women) and sexually non-preferred images (images of girls, boys or men), while eye movements were measured. Early attentional processing (initial orienting) was assessed by the number of first fixations and late attentional processing (maintenance of attention) was assessed by relative fixation time. Results showed that relative fixation time was significantly longer for sexually preferred stimuli than for sexually non-preferred stimuli. Furthermore, the first fixation was more often directed towards the preferred sexual stimulus, when simultaneously presented with a non-sexually preferred stimulus. Thus, the current study showed for the first time an attentional bias to sexually relevant stimuli when presented simultaneously with sexually irrelevant pictures. This finding, along with the discovery that heterosexual men maintained their attention to sexually relevant stimuli, highlights the importance of investigating early and late attentional processes while viewing sexual stimuli. Furthermore, the current study showed that sexually relevant stimuli are favored by the human attentional system.",
"title": ""
},
{
"docid": "63dc375e505ceb5488a06306775969ba",
"text": "N-Methyl-d-aspartate (NMDA) receptors belong to the family of ionotropic glutamate receptors, which mediate most excitatory synaptic transmission in mammalian brains. Calcium permeation triggered by activation of NMDA receptors is the pivotal event for initiation of neuronal plasticity. Here, we show the crystal structure of the intact heterotetrameric GluN1-GluN2B NMDA receptor ion channel at 4 angstroms. The NMDA receptors are arranged as a dimer of GluN1-GluN2B heterodimers with the twofold symmetry axis running through the entire molecule composed of an amino terminal domain (ATD), a ligand-binding domain (LBD), and a transmembrane domain (TMD). The ATD and LBD are much more highly packed in the NMDA receptors than non-NMDA receptors, which may explain why ATD regulates ion channel activity in NMDA receptors but not in non-NMDA receptors.",
"title": ""
},
{
"docid": "6d227bbf8df90274f44a26d9c269c663",
"text": "Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in email, and character recognition errors in documents that come through OCR. Text categorization must work reliably on all input, and thus must tolerate some level of these kinds of problems. We describe here an N-gram-based approach to text categorization that is tolerant of textual errors. The system is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer-oriented newsgroups according to subject, achieving as high as an 80% correct classification rate. There are also several obvious directions for improving the system’s classification performance in those cases where it did not do as well. The system is based on calculating and comparing profiles of N-gram frequencies. First, we use the system to compute profiles on training set data that represent the various categories, e.g., language samples or newsgroup content samples. Then the system computes a profile for a particular document that is to be classified. Finally, the system computes a distance measure between the document’s profile and each of the category profiles. The system selects the category whose profile has the smallest distance to the document’s profile. The profiles involved are quite small, typically 10K bytes for a category training set, and less than 4K bytes for an individual document. Using N-gram frequency profiles provides a simple and reliable way to categorize documents in a wide range of classification tasks.",
"title": ""
},
{
"docid": "86c0547368eb9003beed2ba7eefc75a4",
"text": "Electronic social media offers new opportunities for informal communication in written language, while at the same time, providing new datasets that allow researchers to document dialect variation from records of natural communication among millions of individuals. The unprecedented scale of this data enables the application of quantitative methods to automatically discover the lexical variables that distinguish the language of geographical areas such as cities. This can be paired with the segmentation of geographical space into dialect regions, within the context of a single joint statistical model — thus simultaneously identifying coherent dialect regions and the words that distinguish them. Finally, a diachronic analysis reveals rapid changes in the geographical distribution of these lexical features, suggesting that statistical analysis of social media may offer new insights on the diffusion of lexical change.",
"title": ""
},
{
"docid": "149ffd270f39a330f4896c7d3aa290be",
"text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.",
"title": ""
},
{
"docid": "1b030e734e3ddfb5e612b1adc651b812",
"text": "Clustering1is an essential task in many areas such as machine learning, data mining and computer vision among others. Cluster validation aims to assess the quality of partitions obtained by clustering algorithms. Several indexes have been developed for cluster validation purpose. They can be external or internal depending on the availability of ground truth clustering. This paper deals with the issue of cluster validation of large data set. Indeed, in the era of big data this task becomes even more difficult to handle and requires parallel and distributed approaches. In this work, we are interested in external validation indexes. More specifically, this paper proposes a model for purity based cluster validation in parallel and distributed manner using Map-Reduce paradigm in order to be able to scale with increasing dataset sizes.\n The experimental results show that our proposed model is valid and achieves properly cluster validation of large datasets.",
"title": ""
},
{
"docid": "8d8e5c06269e366044f0e3d5c3be19d0",
"text": "A social network (SN) is a network containing nodes – social entities (people or groups of people) and links between these nodes. Social networks are examples of more general concept of complex networks and SNs are usually free-scale and have power distribution of node degree. Overall, several types of social networks can be enumerated: (i) simple SNs, (ii) multi-layered SNs (with many links between a pair of nodes), (iii) bipartite or multi-modal, heterogeneous SNs (with two or many different types of nodes), (iv) multidimensional SNs (reflecting the data warehousing multidimensional modelling concept), and some more specific like (v) temporal SNs, (vi) large scale SNs, and (vii) virtual SNs. For all these social networks suitable analytical methods may be applied commonly called social network analysis (SNA). They cover in particular: appropriate structural measures, efficient algorithms for their calculation, statistics and data mining methods, e.g. extraction of social communities (clustering). Some types of social networks have their own measures and methods developed. Several real application domains of SNA may be distinguished: classification of nodes for the purpose of marketing, evaluation of organizational structure versus communication structures in companies, recommender systems for hidden knowledge acquisition and for user support in web 2.0, analysis of social groups on web forums and prediction of their evolution. The above SNA methods and applications will be discussed in some details. J. Pokorný, V. Snášel, K. Richta (Eds.): Dateso 2012, pp. 151–151, ISBN 978-80-7378-171-2.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "9998497c000fa194bf414604ff0d69b2",
"text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. This antenna is highly suitable to be used as a base station antenna for multiband operation.",
"title": ""
},
{
"docid": "c340cbb5f6b062caeed570dc2329e482",
"text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.",
"title": ""
},
{
"docid": "0fca0826e166ddbd4c26fe16086ff7ec",
"text": "Enteric redmouth disease (ERM) is a serious septicemic bacterial disease of salmonid fish species. It is caused by Yersinia ruckeri, a Gram-negative rod-shaped enterobacterium. It has a wide host range, broad geographical distribution, and causes significant economic losses in the fish aquaculture industry. The disease gets its name from the subcutaneous hemorrhages, it can cause at the corners of the mouth and in gums and tongue. Other clinical signs include exophthalmia, darkening of the skin, splenomegaly and inflammation of the lower intestine with accumulation of thick yellow fluid. The bacterium enters the fish via the secondary gill lamellae and from there it spreads to the blood and internal organs. Y. ruckeri can be detected by conventional biochemical, serological and molecular methods. Its genome is 3.7 Mb with 3406-3530 coding sequences. Several important virulence factors of Y. ruckeri have been discovered, including haemolyin YhlA and metalloprotease Yrp1. Both non-specific and specific immune responses of fish during the course of Y. ruckeri infection have been well characterized. Several methods of vaccination have been developed for controlling both biotype 1 and biotype 2 Y. ruckeri strains in fish. This review summarizes the current state of knowledge regarding enteric redmouth disease and Y. ruckeri: diagnosis, genome, virulence factors, interaction with the host immune responses, and the development of vaccines against this pathogen.",
"title": ""
},
{
"docid": "a5296748b0a93696e7b15f7db9d68384",
"text": "Microscopic analysis of breast tissues is necessary for a definitive diagnosis of breast cancer which is the most common cancer among women. Pathology examination requires time consuming scanning through tissue images under different magnification levels to find clinical assessment clues to produce correct diagnoses. Advances in digital imaging techniques offers assessment of pathology images using computer vision and machine learning methods which could automate some of the tasks in the diagnostic pathology workflow. Such automation could be beneficial to obtain fast and precise quantification, reduce observer variability, and increase objectivity. In this work, we propose to classify breast cancer histopathology images independent of their magnifications using convolutional neural networks (CNNs). We propose two different architectures; single task CNN is used to predict malignancy and multi-task CNN is used to predict both malignancy and image magnification level simultaneously. Evaluations and comparisons with previous results are carried out on BreaKHis dataset. Experimental results show that our magnification independent CNN approach improved the performance of magnification specific model. Our results in this limited set of training data are comparable with previous state-of-the-art results obtained by hand-crafted features. However, unlike previous methods, our approach has potential to directly benefit from additional training data, and such additional data could be captured with same or different magnification levels than previous data.",
"title": ""
},
{
"docid": "1a45d5e0ccc4816c0c64c7e25e7be4e3",
"text": "The interpolation of correspondences (EpicFlow) was widely used for optical flow estimation in most-recent works. It has the advantage of edge-preserving and efficiency. However, it is vulnerable to input matching noise, which is inevitable in modern matching techniques. In this paper, we present a Robust Interpolation method of Correspondences (called RicFlow) to overcome the weakness. First, the scene is over-segmented into superpixels to revitalize an early idea of piecewise flow model. Then, each model is estimated robustly from its support neighbors based on a graph constructed on superpixels. We propose a propagation mechanism among the pieces in the estimation of models. The propagation of models is significantly more efficient than the independent estimation of each model, yet retains the accuracy. Extensive experiments on three public datasets demonstrate that RicFlow is more robust than EpicFlow, and it outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "904c8b4be916745c7d1f0777c2ae1062",
"text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.",
"title": ""
},
{
"docid": "8fd3c6231e8c8522157439edc7b7344f",
"text": "We are implementing ADAPT, a cognitive architecture for a Pioneer mobile robot, to give the robot the full range of cognitive abilities including perception, use of natural language, learning and the ability to solve complex problems. Our perspective is that an architecture based on a unified theory of robot cognition has the best chance of attaining human-level performance. Existing work in cognitive modeling has accomplished much in the construction of such unified cognitive architectures in areas other than robotics; however, there are major respects in which these architectures are inadequate for robot cognition. This paper examines two major inadequacies of current cognitive architectures for robotics: the absence of support for true concurrency and for active",
"title": ""
}
] |
scidocsrr
|