| query_id (string, 32 chars) | query (string, 5–4.91k chars) | positive_passages (list, 1–22 items) | negative_passages (list, 9–100 items) | subset (7 classes) |
|---|---|---|---|---|
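The rows below follow this schema: each record pairs one free-text query with a list of relevant (positive) passages and a larger list of non-relevant (negative) passages, plus a subset label. As a minimal sketch of how such records might be consumed, the snippet below assumes the split has been exported as JSON Lines with exactly these five fields; the file name and the triple-building strategy are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch: iterate (query, positive, negative) training triples from
# records shaped like the rows below. Assumes a JSON Lines export with the
# fields query_id, query, positive_passages, negative_passages, subset;
# the file name "scidocsrr.jsonl" is a placeholder, not an official path.
import json

def iter_triples(path="scidocsrr.jsonl"):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            query = row["query"]                  # free-text query (e.g. a paper title)
            for pos in row["positive_passages"]:  # each item: {"docid", "text", "title"}
                for neg in row["negative_passages"]:
                    yield query, pos["text"], neg["text"]

if __name__ == "__main__":
    count = sum(1 for _ in iter_triples())
    print(f"{count} (query, positive, negative) triples")
```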
42fbc64c151714a87558e63ee70bdfea
|
Deep Deterministic Policy Gradient for Urban Traffic Light Control
|
[
{
"docid": "05a4ec72afcf9b724979802b22091fd4",
"text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.",
"title": ""
}
] |
[
{
"docid": "aada9722cb54130151657a84417d14a1",
"text": "Classical theories of sensory processing view the brain as a passive, stimulus-driven device. By contrast, more recent approaches emphasize the constructive nature of perception, viewing it as an active and highly selective process. Indeed, there is ample evidence that the processing of stimuli is controlled by top–down influences that strongly shape the intrinsic dynamics of thalamocortical networks and constantly create predictions about forthcoming sensory events. We discuss recent experiments indicating that such predictions might be embodied in the temporal structure of both stimulus-evoked and ongoing activity, and that synchronous oscillations are particularly important in this process. Coherence among subthreshold membrane potential fluctuations could be exploited to express selective functional relationships during states of expectancy or attention, and these dynamic patterns could allow the grouping and selection of distributed neuronal responses for further processing.",
"title": ""
},
{
"docid": "1572891f4c2ab064c6d6a164f546e7c1",
"text": "BACKGROUND Unexplained gastrointestinal (GI) symptoms and joint hypermobility (JHM) are common in the general population, the latter described as benign joint hypermobility syndrome (BJHS) when associated with musculo-skeletal symptoms. Despite overlapping clinical features, the prevalence of JHM or BJHS in patients with functional gastrointestinal disorders has not been examined. METHODS The incidence of JHM was evaluated in 129 new unselected tertiary referrals (97 female, age range 16-78 years) to a neurogastroenterology clinic using a validated 5-point questionnaire. A rheumatologist further evaluated 25 patients with JHM to determine the presence of BJHS. Groups with or without JHM were compared for presentation, symptoms and outcomes of relevant functional GI tests. KEY RESULTS Sixty-three (49%) patients had evidence of generalized JHM. An unknown aetiology for GI symptoms was significantly more frequent in patients with JHM than in those without (P < 0.0001). The rheumatologist confirmed the clinical impression of JHM in 23 of 25 patients, 17 (68%) of whom were diagnosed with BJHS. Patients with co-existent BJHS and GI symptoms experienced abdominal pain (81%), bloating (57%), nausea (57%), reflux symptoms (48%), vomiting (43%), constipation (38%) and diarrhoea (14%). Twelve of 17 patients presenting with upper GI symptoms had delayed gastric emptying. One case is described in detail. CONCLUSIONS & INFERENCES In a preliminary retrospective study, we have found a high incidence of JHM in patients referred to tertiary neurogastroenterology care with unexplained GI symptoms and in a proportion of these a diagnosis of BJHS is made. Symptoms and functional tests suggest GI dysmotility in a number of these patients. The possibility that a proportion of patients with unexplained GI symptoms and JHM may share a common pathophysiological disorder of connective tissue warrants further investigation.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "0ee744ad3c75f7bb9695c47165d87043",
"text": "Clustering is a critical component of many data analysis tasks, but is exceedingly difficult to fully automate. To better incorporate domain knowledge, researchers in machine learning, human-computer interaction, visualization, and statistics have independently introduced various computational tools to engage users through interactive clustering. In this work-in-progress paper, we present a cross-disciplinary literature survey, and find that existing techniques often do not meet the needs of real-world data analysis. Semi-supervised machine learning algorithms often impose prohibitive user interaction costs or fail to account for external analysis requirements. Human-centered approaches and user interface designs often fall short because of their insufficient statistical modeling capabilities. Drawing on effective approaches from each field, we identify five characteristics necessary to support effective human-in-the-loop interactive clustering: iterative, multi-objective, local updates that can operate on any initial clustering and a dynamic set of features. We outline key aspects of our technique currently under development, and share our initial evidence suggesting that all five design considerations can be incorporated into a single algorithm. We plan to demonstrate our technique on three data analysis tasks: feature engineering for classification, exploratory analysis of biomedical data, and multi-document summarization.",
"title": ""
},
{
"docid": "4c102cb77b3992f6cb29a117994804eb",
"text": "These current studies explored the impact of individual differences in personality factors on interface interaction and learning performance behaviors in both an interactive visualization and a menu-driven web table in two studies. Participants were administered 3 psychometric measures designed to assess Locus of Control, Extraversion, and Neuroticism. Participants were then asked to complete multiple procedural learning tasks in each interface. Results demonstrated that all three measures predicted completion times. Additionally, results analyses demonstrated personality factors also predicted the number of insights participants reported while completing the tasks in each interface. We discuss how these findings advance our ongoing research in the Personal Equation of Interaction.",
"title": ""
},
{
"docid": "435200b067ebd77f69a04cc490d73fa6",
"text": "Self-mutilation of genitalia is an extremely rare entity, usually found in psychotic patients. Klingsor syndrome is a condition in which such an act is based upon religious delusions. The extent of genital mutilation can vary from superficial cuts to partial or total amputation of penis to total emasculation. The management of these patients is challenging. The aim of the treatment is restoration of the genital functionality. Microvascular reanastomosis of the phallus is ideal but it is often not possible due to the delay in seeking medical attention, non viability of the excised phallus or lack of surgical expertise. Hence, it is not unusual for these patients to end up with complete loss of the phallus and a perineal urethrostomy. We describe a patient with Klingsor syndrome who presented to us with near total penile amputation. The excised phallus was not viable and could not be used. The patient was managed with surgical reconstruction of the penile stump which was covered with loco-regional flaps. The case highlights that a functional penile reconstruction is possible in such patients even when microvascular reanastomosis is not feasible. This technique should be attempted before embarking upon perineal urethrostomy.",
"title": ""
},
{
"docid": "03a6425423516d0f978bb5f8abe0d62d",
"text": "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive selfimprovement.",
"title": ""
},
{
"docid": "6d471fcfa68cfb474f2792892e197a66",
"text": "The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the rightamount of ceremony; therefore if safety-critical systems require greater emphasis on activities like formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project employing agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only adding process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested they are not suitable for safety-critical systems almost a decade ago, we present our experiences as a case study for renewing the discussion.",
"title": ""
},
{
"docid": "5956e9399cfe817aa1ddec5553883bef",
"text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"title": ""
},
{
"docid": "bdf16e241e4d33af64b7dd5a97873a2c",
"text": "Although lentils (Lens culinaris L) contain several bioactive compounds that have been linked to the prevention of cancer, the in vivo chemopreventive ability of lentils against chemically induced colorectal cancer has not been examined. Our present study examined the hypothesis that lentils could suppress the early carcinogenesis in vivo by virtue of their bioactive micro- and macroconstituents and that culinary thermal treatment could affect their chemopreventive potential. To accomplish this goal, we used raw whole lentils (RWL), raw split lentils (RSL), cooked whole lentils (CWL), and cooked split lentils (CSL). Raw soybeans (RSB; Glycine max) were used for the purpose of comparison with a well-studied chemopreventive agent. Sixty weanling Fischer 344 male rats, 4 to 5 weeks of age, were randomly assigned to 6 groups (10 rats/group): the control group (C) received AIN-93G diet, and treatment leguminous groups of RWL, CWL, RSL, CSL, and RSB received the treatment diets containing AIN-93G+5% of the above-mentioned legumes. After acclimatization for 1 week (at 5th to 6th week of age), all animals were put on the control and treatment diets separately for 5 weeks (from 6th to 11th week of age). At the end of the 5th week of feeding (end of 11th week of age), all rats received 2 subcutaneous injections of azoxymethane carcinogen at 15 mg/kg rat body weight per dose once a week for 2 consecutive weeks. After 17 weeks of the last azoxymethane injection (from 12th to 29th week of age), all rats were euthanized. Chemopreventive ability was assessed using colonic aberrant crypt foci and activity of hepatic glutathione-S-transferases. Significant reductions (P < .05) were found in total aberrant crypt foci number (mean +/- SEM) for RSB (27.33 +/- 4.32), CWL (33.44 +/- 4.56), and RSL (37.00 +/- 6.02) in comparison with the C group (58.33 +/- 8.46). Hepatic glutathione-S-transferases activities increased significantly (P < .05) in rats fed all treatment diets (from 51.38 +/- 3.66 to 67.94 +/- 2.01 micromol mg(-1) min(-1)) when compared with control (C) diet (26.13 +/- 1.01 micromol mg(-1) min(-1)). Our findings indicate that consumption of lentils might be protective against colon carcinogenesis and that hydrothermal treatment resulted in an improvement in the chemopreventive potential for the whole lentils.",
"title": ""
},
{
"docid": "dd51d7253e6e249980e4f1f945f93c84",
"text": "In real-time strategy games like StarCraft, skilled players often block the entrance to their base with buildings to prevent the opponent’s units from getting inside. This technique, called “walling-in”, is a vital part of player’s skill set, allowing him to survive early aggression. However, current artificial players (bots) do not possess this skill, due to numerous inconveniences surfacing during its implementation in imperative languages like C++ or Java. In this text, written as a guide for bot programmers, we address the problem of finding an appropriate building placement that would block the entrance to player’s base, and present a ready to use declarative solution employing the paradigm of answer set programming (ASP). We also encourage the readers to experiment with different declarative approaches to this problem.",
"title": ""
},
{
"docid": "78a0898f35113547cdc3adb567ad7afb",
"text": "Phishing is a form of online identity theft. Phishers use social engineering to steal victims' personal identity data and financial account credentials. Social engineering schemes use spoofed e-mails to lure unsuspecting victims into counterfeit websites designed to trick recipients into divulging financial data such as credit card numbers, account usernames, passwords and social security numbers. This is called a deceptive phishing attack. In this paper, a thorough overview of a deceptive phishing attack and its countermeasure techniques, which is called anti-phishing, is presented. Firstly, technologies used by phishers and the definition, classification and future works of deceptive phishing attacks are discussed. Following with the existing anti-phishing techniques in literatures and research-stage technologies are shown, and a thorough analysis which includes the advantages and shortcomings of countermeasures is given. At last, we show the research of why people fall for phishing attack.",
"title": ""
},
{
"docid": "cce5d75bfcfc22f7af08f6b0b599d472",
"text": "In order to determine if exposure to carcinogens in fire smoke increases the risk of cancer, we examined the incidence of cancer in a cohort of 2,447 male firefighters in Seattle and Tacoma, (Washington, USA). The study population was followed for 16 years (1974–89) and the incidence of cancer, ascertained using a population-based tumor registry, was compared with local rates and with the incidence among 1,878 policemen from the same cities. The risk of cancer among firefighters was found to be similar to both the police and the general male population for most common sites. An elevated risk of prostate cancer was observed relative to the general population (standardized incidence ratio [SIR]=1.4, 95 percent confidence interval [CI]=1.1–1.7) but was less elevated compared with rates in policement (incidence density ratio [IDR]=1.1, CI=0.7–1.8) and was not related to duration of exposure. The risk of colon cancer, although only slightly elevated relative to the general population (SIR=1.1, CI=0.7–1.6) and the police (IDR=1.3, CI=0.6–3.0), appeared to increase with duration of employment. Although the relationship between firefighting and colon cancer is consistent with some previous studies, it is based on small numbers and may be due to chance. While this study did not find strong evidence for an excess risk of cancer, the presence of carcinogens in the firefighting environment warrants periodic re-evaluation of cancer incidence in this population and the continued use of protective equipment.",
"title": ""
},
{
"docid": "b3840c076852c5bc9a2f50e1a1938780",
"text": "The rapid progress in medical and technical innovations in the neonatal intensive care unit (NICU) has been accompanied by concern for outcomes of NICU graduates. Although advances in neonatal care have led to significant changes in survival rates of very small and extremely preterm neonates, early feeding difficulties with the transition from tube feeding to oral feeding are prominent and often persist beyond discharge to home. Progress in learning to feed in the NICU and continued growth in feeding skills after the NICU may be closely tied to fostering neuroprotection and safety. The experience of learning to feed in the NICU may predispose preterm neonates to feeding problems that persist. Neonatal feeding as an area of specialized clinical practice has grown considerably in the last decade. This article is the first in a two-part series devoted to neonatal feeding. Part 1 explores factors in NICU feeding experiences that may serve to constrain or promote feeding skill development, not only in the NICU but long after discharge to home. Part II describes approaches to intervention that support neuroprotection and safety.",
"title": ""
},
{
"docid": "80b86f424d8f99a28f0bd4d16a89fe3d",
"text": "Programming is traditionally taught using a bottom-up approach, where details of syntax and implementation of data structures are the predominant concepts. The top-down approach proposed focuses instead on understanding the abstractions represented by the classical data structures without regard to their physical implementation. Only after the students are comfortable with the behavior and applications of the major data structures do they learn about their implementations or the basic data types like arrays and pointers that are used. This paper discusses the benefits of such an approach and how it is being used in a Computer Science curriculum.",
"title": ""
},
{
"docid": "430d74071a8b399675d10d43b3b337ac",
"text": "Machine learning systems can often achieve high performance on a test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. Based on an analysis of the task, we hypothesize three fallible syntactic heuristics that NLI models are likely to adopt: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI Systems), which contains many examples where the heuristics fail. We find that models trained on MNLI, including the state-of-the-art model BERT, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area.",
"title": ""
},
{
"docid": "dfc44cd25a729035e93dbd1a04806510",
"text": "Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users’ preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on improvement of user feedback.",
"title": ""
},
{
"docid": "e9006af64364e6dcd1ea4684642539de",
"text": "Since the publication of the PDP volumes in 1986,1 learning by backpropagation has become the most popular method of training neural networks. The reason for the popularity is the underlying simplicity and relative power of the algorithm. Its power derives from the fact that, unlike its precursors, the perceptron learning rule and the Widrow-Hoff learning rule, it can be employed for training nonlinear networks of arbitrary connectivity. Since such networks are often required for real-world applications, such a learning procedure is critical. Nearly as important as its power in explaining its popularity is its simplicity. The basic igea is old and simple; namely define an error function and use hill climbing (or gradient descent if you prefer going downhill) to find a set of weights which optimize performance on a particular task. The algorithm is so simple that it can be implemented in a few lines' of code, and there have been no doubt many thousands of implementations of the algorithm by now. The name back propagation actually comes from the term employed by Rosenblatt (1962) for his attempt to generalize the perceptron learning algorithm to the multilayer case. There were many attempts to generalize the perceptron learning procedure to multiple layers during the 1960s and 1970s, but none of them were especially successful. There appear to have been at least three independent inventions of the modem version of the back-propagation algorithm: Paul Werbos developed the basic idea in 1974 in a Ph.D. dissertation entitled",
"title": ""
},
{
"docid": "d1c4e0da79ceb8893f63aa8ea7c8041c",
"text": "This paper describes the GOLD (Generic Obstacle and Lane Detection) system, a stereo vision-based hardware and software architecture developed to increment road safety of moving vehicles: it allows to detect both generic obstacles (without constraints on symmetry or shape) and the lane position in a structured environment (with painted lane markings). It has been implemented on the PAPRICA system and works at a rate of 10 Hz.",
"title": ""
},
{
"docid": "19339fa01942ad3bf33270aa1f6ceae2",
"text": "This study investigated query formulations by users with {\\it Cognitive Search Intents} (CSIs), which are users' needs for the cognitive characteristics of documents to be retrieved, {\\em e.g. comprehensibility, subjectivity, and concreteness. Our four main contributions are summarized as follows (i) we proposed an example-based method of specifying search intents to observe query formulations by users without biasing them by presenting a verbalized task description;(ii) we conducted a questionnaire-based user study and found that about half our subjects did not input any keywords representing CSIs, even though they were conscious of CSIs;(iii) our user study also revealed that over 50\\% of subjects occasionally had experiences with searches with CSIs while our evaluations demonstrated that the performance of a current Web search engine was much lower when we not only considered users' topical search intents but also CSIs; and (iv) we demonstrated that a machine-learning-based query expansion could improve the performances for some types of CSIs.Our findings suggest users over-adapt to current Web search engines,and create opportunities to estimate CSIs with non-verbal user input.",
"title": ""
}
] |
scidocsrr
|
5f8cd134cbf9965c9a961a4bebcc312d
|
An Agile Approach to Building RISC-V Microprocessors
|
[
{
"docid": "35f4a8131a27298b1aa04859450e6620",
"text": "Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic–photonic systems enabled by silicon-based nanophotonic devices8. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic–photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic–photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic–photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.",
"title": ""
}
] |
[
{
"docid": "cf751df3c52306a106fcd00eef28b1a4",
"text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "25cd669a4fcf62ff56669bff22974634",
"text": "In this paper, we introduce a novel framework for combining scientific knowledge within physicsbased models and recurrent neural networks to advance scientific discovery in many dynamical systems. We will first describe the use of outputs from physics-based models in learning a hybrid-physics-data model. Then, we further incorporate physical knowledge in real-world dynamical systems as additional constraints for training recurrent neural networks. We will apply this approach on modeling lake temperature and quality where we take into account the physical constraints along both the depth dimension and time dimension. By using scientific knowledge to guide the construction and learning the data-driven model, we demonstrate that this method can achieve better prediction accuracy as well as scientific consistency of results.",
"title": ""
},
{
"docid": "1feaf48291b7ea83d173b70c23a3b7c0",
"text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).",
"title": ""
},
{
"docid": "4f64b2b2b50de044c671e3d0d434f466",
"text": "Optical flow estimation is one of the oldest and still most active research domains in computer vision. In 35 years, many methodological concepts have been introduced and have progressively improved performances , while opening the way to new challenges. In the last decade, the growing interest in evaluation benchmarks has stimulated a great amount of work. In this paper, we propose a survey of optical flow estimation classifying the main principles elaborated during this evolution, with a particular concern given to recent developments. It is conceived as a tutorial organizing in a comprehensive framework current approaches and practices. We give insights on the motivations, interests and limitations of modeling and optimization techniques, and we highlight similarities between methods to allow for a clear understanding of their behavior. Motion analysis is one of the main tasks of computer vision. From an applicative viewpoint, the information brought by the dynamical behavior of observed objects or by the movement of the camera itself is a decisive element for the interpretation of observed phenomena. The motion characterizations can be extremely variable among the large number of application domains. Indeed, one can be interested in tracking objects, quantifying deformations, retrieving dominant motion, detecting abnormal behaviors, and so on. The most low-level characterization is the estimation of a dense motion field, corresponding to the displacement of each pixel, which is called optical flow. Most high-level motion analysis tasks employ optical flow as a fundamental basis upon which more semantic interpretation is built. Optical flow estimation has given rise to a tremendous quantity of works for 35 years. If a certain continuity can be found since the seminal works of [120,170], a number of methodological innovations have progressively changed the field and improved performances. Evaluation benchmarks and applicative domains have followed this progress by proposing new challenges allowing methods to face more and more difficult situations in terms of motion discontinuities, large displacements, illumination changes or computational costs. Despite great advances, handling these issues in a unique method still remains an open problem. Comprehensive surveys of optical flow literature were carried out in the nineties [21,178,228]. More recently, reviewing works have focused on variational approaches [264], benchmark results [13], specific applications [115], or tutorials restricted to a certain subset of methods [177,260]. However, covering all the main estimation approaches and including recent developments in a comprehensive classification is still lacking in the optical flow field. This survey …",
"title": ""
},
{
"docid": "77d73cf3aa583e12cc102f48be184100",
"text": "The combinatorial cross-regulation of hundreds of sequence-specific transcription factors (TFs) defines a regulatory network that underlies cellular identity and function. Here we use genome-wide maps of in vivo DNaseI footprints to assemble an extensive core human regulatory network comprising connections among 475 sequence-specific TFs and to analyze the dynamics of these connections across 41 diverse cell and tissue types. We find that human TF networks are highly cell selective and are driven by cohorts of factors that include regulators with previously unrecognized roles in control of cellular identity. Moreover, we identify many widely expressed factors that impact transcriptional regulatory networks in a cell-selective manner. Strikingly, in spite of their inherent diversity, all cell-type regulatory networks independently converge on a common architecture that closely resembles the topology of living neuronal networks. Together, our results provide an extensive description of the circuitry, dynamics, and organizing principles of the human TF regulatory network.",
"title": ""
},
{
"docid": "427ebc0500e91e842873c4690cdacf79",
"text": "Bounding volume hierarchy (BVH) has been widely adopted as the acceleration structure in broad-phase collision detection. Previous state-of-the-art BVH-based collision detection approaches exploited the spatio-temporal coherence of simulations by maintaining a bounding volume test tree (BVTT) front. A major drawback of these algorithms is that large deformations in the scenes decrease culling efficiency and slow down collision queries. Moreover, for front-based methods, the inefficient caching on GPU caused by the arbitrary layout of BVH and BVTT front nodes becomes a critical performance issue. We present a fast and robust BVH-based collision detection scheme on GPU that addresses the above problems by ordering and restructuring BVHs and BVTT fronts. Our techniques are based on the use of histogram sort and an auxiliary structure BVTT front log, through which we analyze the dynamic status of BVTT front and BVH quality. Our approach efficiently handles interand intra-object collisions and performs especially well in simulations where there is considerable spatio-temporal coherence. The benchmark results demonstrate that our approach is significantly faster than the previous BVH-based method, and also outperforms other state-of-the-art spatial subdivision schemes in terms of speed. CCS Concepts •Computing methodologies → Collision detection; Physical simulation;",
"title": ""
},
{
"docid": "eff7d3775d12687c81ae91b130c7c562",
"text": "We propose a novel approach for sparse probabilistic principal component analysis, that combines a low rank representation for the latent factors and loadings with a novel sparse variational inference approach for estimating distributions of latent variables subject to sparse support constraints. Inference and parameter estimation for the resulting model is achieved via expectation maximization with a novel variational inference method for the E-step that induces sparsity. We show that this inference problem can be reduced to discrete optimal support selection. The discrete optimization is submodular, hence, greedy selection is guaranteed to achieve 1-1/e fraction of the optimal. Empirical studies indicate effectiveness of the proposed approach for the recovery of a parsimonious decomposition as compared to established baseline methods. We also evaluate our method against state-of-the-art methods on high dimensional fMRI data, and show that the method performs as well as or better than other methods.",
"title": ""
},
{
"docid": "2855a1f420ed782317c1598c9d9c185e",
"text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.",
"title": ""
},
{
"docid": "091eedcd69373f99419a745f2215e345",
"text": "Society is increasingly reliant upon complex and interconnected cyber systems to conduct daily life activities. From personal finance to managing defense capabilities to controlling a vast web of aircraft traffic, digitized information systems and software packages have become integrated at virtually all levels of individual and collective activity. While such integration has been met with immense increases in efficiency of service delivery, it has also been subject to a diverse body of threats from nefarious hackers, groups, and even state government bodies. Such cyber threats have shifted over time to affect various cyber functionalities, such as with Direct Denial of Service (DDoS), data theft, changes to data code, infection via computer virus, and many others.",
"title": ""
},
{
"docid": "8acfcaaa00cbfe275f6809fdaa3c6a78",
"text": "Internet usage has drastically shifted from host-centric end-to-end communication to receiver-driven content retrieval. In order to adapt to this change, a handful of innovative information/content centric networking (ICN) architectures have recently been proposed. One common and important feature of these architectures is to leverage built-in network caches to improve the transmission efficiency of content dissemination. Compared with traditional Web Caching and CDN Caching, ICN Cache takes on several new characteristics: cache is transparent to applications, cache is ubiquitous, and content to be cached is more ine-grained. These distinguished features pose new challenges to ICN caching technologies. This paper presents a comprehensive survey of state-of-art techniques aiming to address these issues, with particular focus on reducing cache redundancy and improving the availability of cached content. As a new research area, this paper also points out several interesting yet challenging research directions in this subject. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "33bb646417d0ebbe01747b97323df5d0",
"text": "Semantic search or text-to-video search in video is a novel and challenging problem in information and multimedia retrieval. Existing solutions are mainly limited to text-to-text matching, in which the query words are matched against the user-generated metadata. This kind of text-to-text search, though simple, is of limited functionality as it provides no understanding about the video content. This paper presents a state-of-the-art system for event search without any user-generated metadata or example videos, known as text-to-video search. The system relies on substantial video content understanding and allows for searching complex events over a large collection of videos. The proposed text-to-video search can be used to augment the existing text-to-text search for video. The novelty and practicality are demonstrated by the evaluation in NIST TRECVID 2014, where the proposed system achieves the best performance. We share our observations and lessons in building such a state-of-the-art system, which may be instrumental in guiding the design of the future system for video search and analysis.",
"title": ""
},
{
"docid": "ce3f09b04cc8a5445e009d65169f1ad1",
"text": "Current methods in treating chronic wounds have had limited success in large part due to the open loop nature of the treatment. We have created a localized 3D-printed smart wound dressing platform that will allow for real-time data acquisition of oxygen concentration, which is an important indicator of wound healing. This will serve as the first leg of a feedback loop for a fully optimized treatment mechanism tailored to the individual patient. A flexible oxygen sensor was designed and fabricated with high sensitivity and linear current output. With a series of off-the-shelf electronic components including a programmable-gain analog front-end, a microcontroller and wireless radio, an integrated electronic system with data readout and wireless transmission capabilities was assembled in a compact package. Using an elastomeric material, a bandage with exceptional flexibility and tensile strength was 3D-printed. The bandage contains cavities for both the oxygen sensor and the electronic systems, with contacts interfacing the two systems. Our integrated, flexible platform is the first step toward providing a self-operating, highly optimized remote therapy for chronic wounds.",
"title": ""
},
{
"docid": "135b9476e787624b899686664b03e6a1",
"text": "Amyotrophic lateral sclerosis (ALS) is the most common neurodegenerative disease of the motor system. Bulbar symptoms such as dysphagia and dysarthria are frequent features of ALS and can result in reductions in life expectancy and quality of life. These dysfunctions are assessed by clinical examination and by use of instrumented methods such as fiberendoscopic evaluation of swallowing and videofluoroscopy. Laryngospasm, another well-known complication of ALS, commonly comes to light during intubation and extubation procedures in patients undergoing surgery. Laryngeal and pharyngeal complications are treated by use of an array of measures, including body positioning, compensatory techniques, voice and breathing exercises, communication devices, dietary modifications, various safety strategies, and neuropsychological assistance. Meticulous monitoring of clinical symptoms and close cooperation within a multidisciplinary team (physicians, speech and language therapists, occupational therapists, dietitians, caregivers, the patients and their relatives) are vital.",
"title": ""
},
{
"docid": "cd56f2a6a5187476c8e63370a14c0dd0",
"text": "This complex infection has a number of objective manifestations, including a characteristic skin lesion called erythema migrans (the most common presentation of early Lyme disease), certain neurologic and cardiac manifestations, and pauciarticular arthritis (the most common presentation of late Lyme disease), all of which usually respond well to conventional antibiotic therapy. Despite resolution of the objective manifestations of infection after antibiotic treatment, a minority of patients have fatigue, musculoskeletal pain, difficulties with concentration or short-term memory, or all of these symptoms. In this article, we refer to these usually mild and self-limiting subjective symptoms as “post–Lyme disease symptoms,” and if they last longer than 6 months, we call them “post–Lyme disease syndrome.”",
"title": ""
},
{
"docid": "99d57cef03e21531be9f9663ec023987",
"text": "Anton Schwartz Dept. of Computer Science Stanford University Stanford, CA 94305 Email: [email protected] Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"title": ""
},
{
"docid": "9ff912ad71c84cfba286f1be7bd8d4b3",
"text": "This article compares traditional industrial-organizational psychology (I-O) research published in Journal of Applied Psychology (JAP) with organizational behavior management (OBM) research published in Journal of Organizational Behavior Management (JOBM). The purpose of this comparison was to identify similarities and differences with respect to research topics and methodologies, and to offer suggestions for what OBM researchers and practitioners can learn from I-O. Articles published in JAP from 1987-1997 were reviewed and compared to articles published during the same decade in JOBM (Nolan, Jarema, & Austin, 1999). This comparison includes Barbara R. Bucklin, Alicia M. Alvero, Alyce M. Dickinson, John Austin, and Austin K. Jackson are affiliated with Western Michigan University. Address correspondence to Alyce M. Dickinson, Department of Psychology, Western Michigan University, Kalamazoo, MI 49008-5052 (E-mail: alyce.dickinson@ wmich.edu.) Journal of Organizational Behavior Management, Vol. 20(2) 2000 E 2000 by The Haworth Press, Inc. All rights reserved. 27 D ow nl oa de d by [ W es te rn M ic hi ga n U ni ve rs ity ] at 1 1: 14 0 3 Se pt em be r 20 12 JOURNAL OF ORGANIZATIONAL BEHAVIOR MANAGEMENT 28 (a) author characteristics, (b) authors published in both journals, (c) topics addressed, (d) type of article, and (e) research characteristics and methodologies. Among the conclusions are: (a) the primary relative strength of OBM is its practical significance, demonstrated by the proportion of research addressing applied issues; (b) the greatest strength of traditional I-O appears to be the variety and complexity of organizational research topics; and (c) each field could benefit from contact with research published in the other. [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-342-9678. E-mail address: <[email protected]> Website: <http://www.HaworthPress.com>]",
"title": ""
},
{
"docid": "2648ec04733bbe56c1740e574c2a08e8",
"text": "Most work on tweet sentiment analysis is mono-lingual and the models that are generated by machine learning strategies do not generalize across multiple languages. Cross-language sentiment analysis is usually performed through machine translation approaches that translate a given source language into the target language of choice. Machine translation is expensive and the results that are provided by theses strategies are limited by the quality of the translation that is performed. In this paper, we propose a language-agnostic translation-free method for Twitter sentiment analysis, which makes use of deep convolutional neural networks with character-level embeddings for pointing to the proper polarity of tweets that may be written in distinct (or multiple) languages. The proposed method is more accurate than several other deep neural architectures while requiring substantially less learnable parameters. The resulting model is capable of learning latent features from all languages that are employed during the training process in a straightforward fashion and it does not require any translation process to be performed whatsoever. We empirically evaluate the efficiency and effectiveness of the proposed approach in tweet corpora based on tweets from four different languages, showing that our approach comfortably outperforms the baselines. Moreover, we visualize the knowledge that is learned by our method to qualitatively validate its effectiveness for tweet sentiment classification.",
"title": ""
},
{
"docid": "7ec93b17c88d09f8a442dd32127671d8",
"text": "Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.",
"title": ""
},
{
"docid": "7297a6317a3fc515d2d46943a2792c69",
"text": "The present work elaborates the process design methodology for the evaluation of the distillation systems based on the economic, exergetic and environmental point of view, the greenhouse gas (GHG) emissions. The methodology proposes the Heat Integrated Pressure Swing Distillation Sequence (HiPSDS) is economic and reduces the GHG emissions than the conventional Extractive Distillation Sequence (EDS) and the Pressure Swing Distillation Sequence (PSDS) for the case study of isobutyl alcohol and isobutyl acetate with the solvents for EDS and with low pressure variations for PSDS and HiPSDS. The study demonstrates that the exergy analysis can predict the results of the economic and environmental evaluation associated with the process design.",
"title": ""
}
] |
scidocsrr
|
4194903a33c18bc27d20c9fdfd20ac7c
|
Image Classification Using Generative Neuro Evolution for Deep Learning
|
[
{
"docid": "31dbe722d956df236c7ebe2535fc4125",
"text": "Intelligence in nature is the product of living brains, which are themselves the product of natural evolution. Although researchers in the field of neuroevolution (NE) attempt to recapitulate this process, artificial neural networks (ANNs) so far evolved through NE algorithms do not match the distinctive capabilities of biological brains. The recently introduced hypercube-based neuroevolution of augmenting topologies (HyperNEAT) approach narrowed this gap by demonstrating that the pattern of weights across the connectivity of an ANN can be generated as a function of its geometry, thereby allowing large ANNs to be evolved for high-dimensional problems. Yet the positions and number of the neurons connected through this approach must be decided a priori by the user and, unlike in living brains, cannot change during evolution. Evolvable-substrate HyperNEAT (ES-HyperNEAT), introduced in this article, addresses this limitation by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. ES-HyperNEAT is demonstrated through multi-task, maze navigation, and modular retina domains, revealing that the ANNs generated by this new approach assume natural properties such as neural topography and geometric regularity. Also importantly, ES-HyperNEAT's compact indirect encoding can be seeded to begin with a bias toward a desired class of ANN topographies, which facilitates the evolutionary search. The main conclusion is that ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover.",
"title": ""
}
] |
[
{
"docid": "ca75dfe870d902884c8ad827001711e1",
"text": "Impact of pre-sowing exposure of seeds to static magnetic field were studied on 1 month old maize [Zea mays . var: HQPM.1] plants under field conditions. Pre-standardized magnetic field strength of 100 mT (2 h) and 200 mT (1 h), which were proven best for improving different seedling parameters under laboratory condition, were used for this study. Magnetic field treatment altered growth, superoxide radical level, antioxidant enzymes and photosynthesis. Among the different growth parameters, leaf area and root length were the most enhanced parameters (78–40%, respectively), over untreated plants. Electron paramagnetic resonance spectroscopy study showed that superoxide radical was reduced and hydroxyl radical was unaffected after magnetic field treatment. With decrease in free radical content, antioxidant enzymes like superoxide dismutase and peroxidase were also reduced by 43 and 23%, respectively, in plants that emerged from magnetically treated seeds. Measurement of Chlorophyll a fluorescence by plant efficiency analyzer showed that the potential of processing light energy through photosynthetic machinery was enhanced by magnetic field treatment. Performance index of the plant enhanced up to two-fold and phenomenological leaf model showed more active reaction centers after magnetic field treatment. Among the two field strengths used, 200 mT (1 h) was more effective in altering all these parameters. It is concluded that pre-sowing magnetic field treatment can be effectively used for improving plant growth and development under field conditions.",
"title": ""
},
{
"docid": "b67f797eecc9ff2c29d383d15426c520",
"text": "A two-stack 42-45GHz power amplifier is implemented in 45nm SOI CMOS. Transistor stacking allows increased drain biasing to increase output power. Additionally, shunt inter-stage matching is used and improves PAE by more than 6%. This amplifier exhibits 18.6dBm saturated output power, with peak power gain of 9.5dB. It occupies 0.3mm2 including pads while achieving a peak PAE of 34%. The PAE remains above 30% from 42 to 45GHz.",
"title": ""
},
{
"docid": "efae02feebc4a2efe2cf98ab4d19cd34",
"text": "User behavior on the Web changes over time. For example, the queries that people issue to search engines, and the underlying informational goals behind the queries vary over time. In this paper, we examine how to model and predict this temporal user behavior. We develop a temporal modeling framework adapted from physics and signal processing that can be used to predict time-varying user behavior using smoothing and trends. We also explore other dynamics of Web behaviors, such as the detection of periodicities and surprises. We develop a learning procedure that can be used to construct models of users' activities based on features of current and historical behaviors. The results of experiments indicate that by using our framework to predict user behavior, we can achieve significant improvements in prediction compared to baseline models that weight historical evidence the same for all queries. We also develop a novel learning algorithm that explicitly learns when to apply a given prediction model among a set of such models. Our improved temporal modeling of user behavior can be used to enhance query suggestions, crawling policies, and result ranking.",
"title": ""
},
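The passage above models time-varying user behavior with smoothing and trend components. As a rough illustration of that framework — not the authors' implementation — the Python sketch below applies Holt's double exponential smoothing to a hypothetical daily query-volume series; the smoothing parameters alpha and beta and the data are assumed values.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=7):
    """Holt's linear (double) exponential smoothing: level plus trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    # Extrapolate the smoothed level and trend into the future.
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical daily counts for one query (rising trend with noise).
daily_counts = [120, 132, 128, 140, 151, 149, 160, 171, 169, 182]
print(holt_forecast(daily_counts, horizon=3))
```

The trend term is what lets the forecast follow the kind of sustained drift in query popularity the abstract describes, while the level term smooths day-to-day noise.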
{
"docid": "5e2ee8afe2f74c8bc30e48fb9dc6409c",
"text": "Change detection can be treated as a generative learning procedure, in which the connection between bitemporal images and the desired change map can be modeled as a generative one. In this letter, we propose an unsupervised change detection method based on generative adversarial networks (GANs), which has the ability of recovering the training data distribution from noise input. Here, the joint distribution of the two images to be detected is taken as input and an initial difference image (DI), generated by traditional change detection method such as change vector analysis, is used to provide prior knowledge for sampling the training data based on Bayesian theorem and GAN’s min–max game theory. Through the continuous adversarial learning, the shared mapping function between the training data and their corresponding image patches can be built in GAN’s generator, from which a better DI can be generated. Finally, an unsupervised clustering algorithm is used to analyze the better DI to obtain the desired binary change map. Theoretical analysis and experimental results demonstrate the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "c056fa934bbf9bc6a286cd718f3a7217",
"text": "The advent of deep sub-micron technology has exacerbated reliability issues in on-chip interconnects. In particular, single event upsets, such as soft errors, and hard faults are rapidly becoming a force to be reckoned with. This spiraling trend highlights the importance of detailed analysis of these reliability hazards and the incorporation of comprehensive protection measures into all network-on-chip (NoC) designs. In this paper, we examine the impact of transient failures on the reliability of on-chip interconnects and develop comprehensive counter-measures to either prevent or recover from them. In this regard, we propose several novel schemes to remedy various kinds of soft error symptoms, while keeping area and power overhead at a minimum. Our proposed solutions are architected to fully exploit the available infrastructures in an NoC and enable versatile reuse of valuable resources. The effectiveness of the proposed techniques has been validated using a cycle-accurate simulator",
"title": ""
},
{
"docid": "ddc556ae150e165dca607e4a674583ae",
"text": "Increasing patient numbers, changing demographics and altered patient expectations have all contributed to the current problem with 'overcrowding' in emergency departments (EDs). The problem has reached crisis level in a number of countries, with significant implications for patient safety, quality of care, staff 'burnout' and patient and staff satisfaction. There is no single, clear definition of the cause of overcrowding, nor a simple means of addressing the problem. For some hospitals, the option of ambulance diversion has become a necessity, as overcrowded waiting rooms and 'bed-block' force emergency staff to turn patients away. But what are the options when ambulance diversion is not possible? Christchurch Hospital, New Zealand is a tertiary level facility with an emergency department that sees on average 65,000 patients per year. There are no other EDs to whom patients can be diverted, and so despite admission rates from the ED of up to 48%, other options need to be examined. In order to develop a series of unified responses, which acknowledge the multifactorial nature of the problem, the Emergency Department Cardiac Analogy model of ED flow, was developed. This model highlights the need to intervene at each of three key points, in order to address the issue of overcrowding and its associated problems.",
"title": ""
},
{
"docid": "405fd8fd4d08cd26605b93f75c3038ae",
"text": "Query-processing costs on large text databases are dominated by the need to retrieve and scan the inverted list of each query term. Retrieval time for inverted lists can be greatly reduced by the use of compression, but this adds to the CPU time required. Here we show that the CPU component of query response time for conjunctive Boolean queries and for informal ranked queries can be similarly reduced, at little cost in terms of storage, by the inclusion of an internal index in each compressed inverted list. This method has been applied in a retrieval system for a collection of nearly two million short documents. Our experimental results show that the self-indexing strategy adds less than 20% to the size of the compressed inverted file, which itself occupies less than 10% of the indexed text, yet can reduce processing time for Boolean queries of 5-10 terms to under one fifth of the previous cost. Similarly, ranked queries of 40-50 terms can be evaluated in as little as 25% of the previous time, with little or no loss of retrieval effectiveness.",
"title": ""
},
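The passage above adds an internal index to each compressed inverted list so that Boolean intersection can skip over irrelevant postings. The sketch below illustrates the general idea with uncompressed lists and fixed-stride skip pointers; the stride rule and example data are illustrative assumptions, not the paper's self-indexing scheme.

```python
import math

def add_skips(postings):
    """Attach a skip pointer roughly every sqrt(n) entries (maps index -> skip target index)."""
    n = len(postings)
    stride = max(int(math.sqrt(n)), 1)
    return {i: i + stride for i in range(0, n - stride, stride)}

def intersect(l1, l2):
    """AND two sorted postings lists, using skip pointers on the first list."""
    skips1 = add_skips(l1)
    i = j = 0
    result = []
    while i < len(l1) and j < len(l2):
        if l1[i] == l2[j]:
            result.append(l1[i])
            i += 1
            j += 1
        elif l1[i] < l2[j]:
            # Jump ahead via the skip pointer when the target is still <= the other list's doc.
            if i in skips1 and l1[skips1[i]] <= l2[j]:
                i = skips1[i]
            else:
                i += 1
        else:
            j += 1
    return result

print(intersect([2, 4, 8, 16, 19, 23, 28, 43], [1, 2, 3, 5, 8, 41, 43]))   # [2, 8, 43]
```

In the compressed setting of the paper, the same skip structure additionally lets the decoder avoid decompressing the blocks it jumps over, which is where the CPU savings come from.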
{
"docid": "555ad116b9b285051084423e2807a0ba",
"text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '",
"title": ""
},
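To make the recommendation above concrete, here is a minimal Python sketch of particle swarm optimization with Clerc's constriction coefficient (phi = 4.1, chi ≈ 0.7298) while clamping velocities to Vmax = Xmax on each dimension. The benchmark function, swarm size and iteration count are assumptions for the example, not the paper's experimental setup.

```python
import math
import random

def sphere(x):
    """Simple benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso_constriction(f, dim=5, n_particles=20, iters=200, xmax=5.12, seed=0):
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2                                              # 4.1
    chi = 2 / abs(2 - phi - math.sqrt(phi * phi - 4 * phi))    # ~0.7298
    vmax = xmax                                                # clamp |v| to the dynamic range

    pos = [[rng.uniform(-xmax, xmax) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = chi * (vel[i][d]
                           + c1 * r1 * (pbest[i][d] - pos[i][d])
                           + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))           # velocity clamping
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest_val

print(pso_constriction(sphere))
```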
{
"docid": "42d2cdb17f23e22da74a405ccb71f09b",
"text": "Nostalgia is a psychological phenomenon we all can relate to but have a hard time to define. What characterizes the mental state of feeling nostalgia? What psychological function does it serve? Different published materials in a wide range of fields, from consumption research and sport science to clinical psychology, psychoanalysis and sociology, all have slightly different definition of this mental experience. Some claim it is a psychiatric disease giving melancholic emotions to a memory you would consider a happy one, while others state it enforces positivity in our mood. First in this paper a thorough review of the history of nostalgia is presented, then a look at the body of contemporary nostalgia research to see what it could be constituted of. Finally, we want to dig even deeper to see what is suggested by the literature in terms of triggers and functions. Some say that digitally recorded material like music and videos has a potential nostalgic component, which could trigger a reflection of the past in ways that was difficult before such inventions. Hinting towards that nostalgia as a cultural phenomenon is on a rising scene. Some authors say that odors have the strongest impact on nostalgic reverie due to activating it without too much cognitive appraisal. Cognitive neuropsychology has shed new light on a lot of human psychological phenomena‘s and even though empirical testing have been scarce in this field, it should get a fair scrutiny within this perspective as well and hopefully helping to clarify the definition of the word to ease future investigations, both scientifically speaking and in laymen‘s retro hysteria.",
"title": ""
},
{
"docid": "f7792dbc29356711c2170d5140030142",
"text": "A C-Ku band GaN monolithic microwave integrated circuit (MMIC) transmitter/receiver (T/R) frontend module with a novel RF interface structure has been successfully developed by using multilayer ceramics technology. This interface improves the insertion loss with wideband characteristics operating up to 40 GHz. The module contains a GaN power amplifier (PA) with output power higher than 10 W over 6–18 GHz and a GaN low-noise amplifier (LNA) with a gain of 15.9 dB over 3.2–20.4 GHz and noise figure (NF) of 2.3–3.7 dB over 4–18 GHz. A fabricated T/R module occupying only 12 × 30 mm2 delivers an output power of 10 W up to the Ku-band. To our knowledge, this is the first demonstration of a C-Ku band T/R frontend module using GaN MMICs with wide bandwidth, 10W output power, and small size operating up to the Ku-band.",
"title": ""
},
{
"docid": "74dda39eb1b5afaf088daa9b93ecceac",
"text": "BACKGROUND AND PURPOSE\nWe aimed to assess the prevalence of depressive symptoms among caregivers of stroke survivors and to determine which patient- or stroke-related factors are associated with and can be used to predict caregiver depression during an 18-month follow-up after stroke.\n\n\nMETHODS\nWe examined 98 caregivers of 100 consecutive patients experiencing their first ischemic stroke in Helsinki University Central Hospital. The caregivers were interviewed at the acute phase and at 6 months and 18 months. Depression was assessed with the Beck Depression Inventory. The neurological, functional, cognitive, and emotional status of the patients was assessed 5x during the follow-up with a comprehensive test battery.\n\n\nRESULTS\nA total of 30% to 33% of all caregivers were depressed during the follow-up; the rates were higher than those of the patients. At the acute phase, caregiver depression was associated with stroke severity and older age of the patient, and at 18 months the older age of the patient was associated with depression of the spouses. In later follow-up, caregiver depression was best predicted by the caregiver's depression at acute phase.\n\n\nCONCLUSIONS\nIdentifying those caregivers at highest risk for poor emotional outcome in follow-up requires not only assessment of patient-related factors but also interview of the caregiver during the early poststroke period.",
"title": ""
},
{
"docid": "369af16d8d6bcaaa22b1ef727768e5e3",
"text": "We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using non-systematic search in Pubmed, Web of Science, IEEE Xplore® Digital Library, Google Scholar, and through references in identified sources (n = 22). Exclusions are due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms, 5 were developed for brain image registration; (iii) 6 are under active development but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source, licensing, GPU support, active community, several file formats, algorithms, and similarity measures, the tools Elastics and Plastimatch are chosen for the platform ITK and without platform requirements, respectively. Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not be included in the tools, yet.",
"title": ""
},
{
"docid": "8f1a5420deb75a2b664ceeaae8fc03f9",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "4ed39cd28d2891d79cd6a062e5f64518",
"text": "We evaluate the applicability of a biologically-motivated algorithm to select visually-salient regions of interest in video streams for multiply-foveated video compression. Regions are selected based on a nonlinear integration of low-level visual cues, mimicking processing in primate occipital, and posterior parietal cortex. A dynamic foveation filter then blurs every frame, increasingly with distance from salient locations. Sixty-three variants of the algorithm (varying number and shape of virtual foveas, maximum blur, and saliency competition) are evaluated against an outdoor video scene, using MPEG-1 and constant-quality MPEG-4 (DivX) encoding. Additional compression radios of 1.1 to 8.5 are achieved by foveation. Two variants of the algorithm are validated against eye fixations recorded from four to six human observers on a heterogeneous collection of 50 video clips (over 45 000 frames in total). Significantly higher overlap than expected by chance is found between human and algorithmic foveations. With both variants, foveated clips are, on average, approximately half the size of unfoveated clips, for both MPEG-1 and MPEG-4. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.",
"title": ""
},
{
"docid": "45d3c305f6ab96489540819059d0521d",
"text": "Over 80% of people with social anxiety disorder (SAD) do not receive any type of treatment, despite the existence of effective evidence-based treatments. Barriers to treatment include lack of trained therapists (particularly in nonmetropolitan areas), logistical difficulties (e.g., cost, time, transportation), concerns regarding social stigma, and fear of negative evaluation from health care providers. Interventions conducted through electronic communication media, such as the Internet, have the potential to reach individuals who otherwise would not have access to evidence-based treatments. Second Life is an online virtual world that holds great promise in the widespread delivery of evidence-based treatments. We assessed the feasibility, acceptability, and initial efficacy of an acceptance-based behavior therapy in Second Life to treat adults with generalized SAD. Participants (n=14) received 12 sessions of weekly therapy and were assessed at pretreatment, midtreatment, posttreatment, and follow-up. Participants and therapists rated the treatment program as acceptable and feasible, despite frequently encountered technical difficulties. Analyses showed significant pretreatment to follow-up improvements in social anxiety symptoms, depression, disability, and quality of life, with effect sizes comparable to previously published results of studies delivering in-person cognitive behavior therapy for SAD. Implications and future directions are discussed.",
"title": ""
},
{
"docid": "7f82ff12310f74b17ba01cac60762a8c",
"text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.",
"title": ""
},
{
"docid": "157f5ef02675b789df0f893311a5db72",
"text": "We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.",
"title": ""
},
{
"docid": "8decf7a6eb2f057fe622fdd3b25511ae",
"text": "FinFET has been proposed as an alternative for bulk CMOS in current and future technology nodes due to more effective channel control, reduced random dopant fluctuation, high ON/OFF current ratio, lower energy consumption, etc. Key characteristics of FinFET operating in the sub/near-threshold region are very different from those in the strong-inversion region. This paper first introduces an analytical transregional FinFET model with high accuracy in both sub- and near-threshold regimes. Next, the paper extends the well-known and widely-adopted logical effort delay calculation and optimization method to FinFET circuits operating in multiple voltage (sub/near/super-threshold) regimes. More specifically, a joint optimization of gate sizing and adaptive independent gate control is presented and solved in order to minimize the delay of FinFET circuits operating in multiple voltage regimes. Experimental results on a 32nm Predictive Technology Model for FinFET demonstrate the effectiveness of the proposed logical effort-based delay optimization framework.",
"title": ""
},
{
"docid": "cd0aa599211f4d3c6298297f021c59b3",
"text": "INTRODUCTION\nDespite today's standard procedure for staging and treating non-muscle-invasive bladder cancer by transurethral resection via a wire loop (TURBT), several other publications have dealt with a different concept of en bloc resection of bladder tumors using different energy sources.\n\n\nMATERIAL AND METHODS\nMEDLINE and the Cochrane central register were searched for the following terms: en bloc, mucosectomy, laser, resection, ablation, Neodym, Holmium, Thulium, transitional cell carcinoma.\n\n\nRESULTS\nFourteen research articles dealing with en bloc resection of non-muscle-invasive bladder cancer could be identified (modified resection loops: six, laser: six, waterjet hydrodissection: two).\n\n\nCONCLUSION\nEn bloc resection of bladder tumors >1 cm can be performed safely with very low complication rates independent of the power source. By using laser, complication rates might even be decreased, based on their good hemostatic effect and by avoiding the obturator nerve reflex. A further advantage seems to be accurate pathologic staging of en bloc tumors. Randomized controlled trials are still needed to support the assumed advantages of en bloc resection over the standard TURBT with regard to primary targets: First-time clearance of disease, accurate staging and recurrence rates.",
"title": ""
},
{
"docid": "9f53016723d5064e3790cd316399e082",
"text": "We investigated the processing effort during visual search and counting tasks using a pupil dilation measure. Search difficulty was manipulated by varying the number of distractors as well as the heterogeneity of the distractors. More difficult visual search resulted in more pupil dilation than did less difficult search. These results confirm a link between effort and increased pupil dilation. The pupil dilated more during the counting task than during target-absent search, even though the displays were identical, and the two tasks were matched for reaction time. The moment-to-moment dilation pattern during search suggests little effort in the early stages, but increasingly more effort towards response, whereas the counting task involved an increased initial effort, which was sustained throughout the trial. These patterns can be interpreted in terms of the differential memory load for item locations in each task. In an additional experiment, increasing the spatial memory requirements of the search evoked a corresponding increase in pupil dilation. These results support the view that search tasks involve some, but limited, memory for item locations, and the effort associated with this memory load increases during the trials. In contrast, counting involves a heavy locational memory component from the start.",
"title": ""
}
] |
scidocsrr
|
021a235d989467e03d929077557323b1
|
An Efficient Data Fingerprint Query Algorithm Based on Two-Leveled Bloom Filter
|
[
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
},
{
"docid": "26b415f796b85dea5e63db9c58b6c790",
"text": "A predominant portion of Internet services, like content delivery networks, news broadcasting, blogs sharing and social networks, etc., is data centric. A significant amount of new data is generated by these services each day. To efficiently store and maintain backups for such data is a challenging task for current data storage systems. Chunking based deduplication (dedup) methods are widely used to eliminate redundant data and hence reduce the required total storage space. In this paper, we propose a novel Frequency Based Chunking (FBC) algorithm. Unlike the most popular Content-Defined Chunking (CDC) algorithm which divides the data stream randomly according to the content, FBC explicitly utilizes the chunk frequency information in the data stream to enhance the data deduplication gain especially when the metadata overhead is taken into consideration. The FBC algorithm consists of two components, a statistical chunk frequency estimation algorithm for identifying the globally appeared frequent chunks, and a two-stage chunking algorithm which uses these chunk frequencies to obtain a better chunking result. To evaluate the effectiveness of the proposed FBC algorithm, we conducted extensive experiments on heterogeneous datasets. In all experiments, the FBC algorithm persistently outperforms the CDC algorithm in terms of achieving a better dedup gain or producing much less number of chunks. Particularly, our experiments show that FBC produces 2.5 ~ 4 times less number of chunks than that of a baseline CDC which achieving the same Duplicate Elimination Ratio (DER). Another benefit of FBC over CDC is that the FBC with average chunk size greater than or equal to that of CDC achieves up to 50% higher DER than that of a CDC algorithm.",
"title": ""
},
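To make the chunking vocabulary above concrete, here is a minimal Python sketch of content-defined chunking driven by a simple content hash, followed by a frequency count over the resulting chunk digests. It is only a toy stand-in for the paper's two-stage FBC pipeline: the hash, mask, size limits and example data are all assumptions.

```python
import hashlib
from collections import Counter

def cdc_chunks(data, mask=0x0FFF, min_size=512, max_size=8192):
    """Cut a chunk whenever a simple content-driven hash hits a bit pattern
    (a toy stand-in for the Rabin-fingerprint boundaries used by CDC)."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def chunk_frequencies(streams):
    """Count how often identical chunks (by SHA-1 digest) occur across the input streams."""
    freq = Counter()
    for data in streams:
        for chunk in cdc_chunks(data):
            freq[hashlib.sha1(chunk).hexdigest()] += 1
    return freq

# Two hypothetical backup snapshots that share most of their content.
base = bytes(range(256)) * 64
snapshots = [base + b"version-1", base + b"version-2 with a small edit"]
print(chunk_frequencies(snapshots).most_common(3))
```

FBC's insight, as summarized above, is to feed such frequency statistics back into a second chunking pass so that frequently recurring content tends to fall on its own chunk boundaries.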
{
"docid": "4bf6c59cdd91d60cf6802ae99d84c700",
"text": "This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block’s contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems. We have built a prototype of the system and present some preliminary performance results. The system uses magnetic disks as the storage technology, resulting in an access time for archival data that is comparable to non-archival data. The feasibility of the write-once model for storage is demonstrated using data from over a decade’s use of two Plan 9 file systems.",
"title": ""
}
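The passage above describes a write-once, content-addressed block store in which the hash of a block's contents acts as its identifier, so duplicate blocks coalesce automatically. The sketch below is a minimal in-memory illustration of that idea using SHA-256; Venti itself uses SHA-1 scores and persistent on-disk structures, so this is only a conceptual sketch.

```python
import hashlib

class ContentAddressedStore:
    """Toy write-once block store keyed by the SHA-256 digest of each block."""

    def __init__(self):
        self._blocks = {}

    def write(self, block: bytes) -> str:
        score = hashlib.sha256(block).hexdigest()
        # Write-once semantics: an existing block is never overwritten,
        # and identical blocks are stored only once (deduplication).
        self._blocks.setdefault(score, block)
        return score

    def read(self, score: str) -> bytes:
        return self._blocks[score]

store = ContentAddressedStore()
s1 = store.write(b"backup block 1")
s2 = store.write(b"backup block 1")     # duplicate coalesces to the same score
s3 = store.write(b"backup block 2")
assert s1 == s2 and s1 != s3
print(len(store._blocks), "unique blocks stored")
```

Clients that keep only the returned scores can rebuild higher-level structures (file trees, snapshots) while the store transparently deduplicates their shared blocks.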
] |
[
{
"docid": "b69a39dd203eb6d2a27dae650ef7e6cb",
"text": "In this paper, a high-power high-efficiency wireless-power-transfer system using the class-E operation for transmitter via inductive coupling has been designed and fabricated using the proposed design approach. The system requires no complex external control system but relies on its natural impedance response to achieve the desired power-delivery profile across a wide range of load resistances while maintaining high efficiency to prevent any heating issues. The proposed system consists of multichannels with independent gate drive to control power delivery. The fabricated system is compact and capable of 295 W of power delivery at 75.7% efficiency with forced air cooling and of 69 W of power delivery at 74.2% efficiency with convection cooling. This is the highest power and efficiency of a loosely coupled planar wireless-power-transfer system reported to date.",
"title": ""
},
{
"docid": "b0840d44b7ec95922eeed4ef71b338f9",
"text": "Decoding DNA symbols using next-generation sequencers was a major breakthrough in genomic research. Despite the many advantages of next-generation sequencers, e.g., the high-throughput sequencing rate and relatively low cost of sequencing, the assembly of the reads produced by these sequencers still remains a major challenge. In this review, we address the basic framework of next-generation genome sequence assemblers, which comprises four basic stages: preprocessing filtering, a graph construction process, a graph simplification process, and postprocessing filtering. Here we discuss them as a framework of four stages for data analysis and processing and survey variety of techniques, algorithms, and software tools used during each stage. We also discuss the challenges that face current assemblers in the next-generation environment to determine the current state-of-the-art. We recommend a layered architecture approach for constructing a general assembler that can handle the sequences generated by different sequencing platforms.",
"title": ""
},
{
"docid": "d2a89459ca4a0e003956d6fe4871bb34",
"text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.",
"title": ""
},
{
"docid": "c07a0053f43d9e1f98bb15d4af92a659",
"text": "We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.",
"title": ""
},
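The passage above scores an utterance against domain embeddings generated from domain attributes in a shared space. The following sketch shows only the scoring step, with toy bag-of-words "embeddings" and made-up domains and attributes; it is not the paper's trained neural ranker, and all names are hypothetical.

```python
import math
from collections import Counter

def embed(tokens):
    """Toy embedding: a normalized bag-of-words vector (stand-in for a learned encoder)."""
    counts = Counter(tokens)
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(u, v):
    return sum(u[w] * v.get(w, 0.0) for w in u)

# Hypothetical domains described by attribute keywords; new domains can be added at runtime.
domains = {
    "weather": ["weather", "forecast", "temperature", "rain"],
    "music":   ["music", "song", "play", "artist"],
    "timer":   ["timer", "countdown", "minutes", "set"],
}
domain_vecs = {name: embed(attrs) for name, attrs in domains.items()}

def predict_domain(utterance):
    u = embed(utterance.lower().split())
    return max(domain_vecs, key=lambda d: cosine(u, domain_vecs[d]))

print(predict_domain("play the latest song by my favourite artist"))
print(predict_domain("set a timer for ten minutes"))
```

Because a domain's vector is built from its attribute description rather than from training examples, a domain unseen at training time can still be ranked, which is the zero-shot property the abstract emphasizes.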
{
"docid": "2e73406dd4ebd7ba90c9c20a142a9684",
"text": "Blocking characteristics of diamond junction field-effect transistors were evaluated at room temperature (RT) and 200 <sup>°</sup>C. A high source-drain bias (breakdown voltage) of 566 V was recorded at RT, whereas it increased to 608 V at 200 <sup>°</sup>C. The positive temperature coefficient of the breakdown voltage indicates the avalanche breakdown of the device. We found that the breakdown occurred at the drain edge of the p-n junction between p-channel and the n<sup>+</sup>-gates. All four devices measured in this letter showed a maximum gate-drain bias over 500 V at RT and 600 V at 200 <sup>°</sup>C.",
"title": ""
},
{
"docid": "8ec6132195c10eedfa3e2ffa70d271b5",
"text": "Devise metrics. Scientists, social scientists and economists need to design a set of practical indices for tracking progress on each SDG. Ensuring access to sustainable and modern energy for all (goal 7), for example, will require indicators of improvements in energy efficiency and carbon savings from renewable-energy technologies (see go.nature.com/pkij7y). Parameters other than just economic growth must be included, such as income inequality, carbon emissions, population and lifespans 1 .",
"title": ""
},
{
"docid": "7280754ec81098fe38023efcb25871ba",
"text": "In this paper, we present a complete framework to inverse render faces with a 3D Morphable Model (3DMM). By decomposing the image formation process into geometric and photometric parts, we are able to state the problem as a multilinear system which can be solved accurately and efficiently. As we treat each contribution as independent, the objective function is convex in the parameters and a global solution is guaranteed. We start by recovering 3D shape using a novel algorithm which incorporates generalization error of the model obtained from empirical measurements. We then describe two methods to recover facial texture, diffuse lighting, specular reflectance, and camera properties from a single image. The methods make increasingly weak assumptions and can be solved in a linear fashion. We evaluate our findings on a publicly available database, where we are able to outperform an existing state-of-the-art algorithm. We demonstrate the usability of the recovered parameters in a recognition experiment conducted on the CMU-PIE database.",
"title": ""
},
{
"docid": "463543546eeca427eb348df6c019c986",
"text": "Blockchains have recently generated explosive interest from both academia and industry, with many proposed applications. But descriptions of many these proposals are more visionary projections than realizable proposals, and even basic definitions are often missing. We define “blockchain” and “blockchain network”, and then discuss two very different, well known classes of blockchain networks: cryptocurrencies and Git repositories. We identify common primitive elements of both and use them to construct a framework for explicitly articulating what characterizes blockchain networks. The framework consists of a set of questions that every blockchain initiative should address at the very outset. It is intended to help one decide whether or not blockchain is an appropriate approach to a particular application, and if it is, to assist in its initial design stage.",
"title": ""
},
{
"docid": "299e7f7d1c48d4a6a22c88dcf422f7a1",
"text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.",
"title": ""
},
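As an illustrative configuration of the approach above — a 3-D convolutional feature extractor over hyperspectral patches regularized with dropout and an L2 penalty — the PyTorch sketch below assumes patch dimensions, channel counts and weight-decay values that are not taken from the paper.

```python
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    """Small 3-D CNN over (bands, height, width) hyperspectral patches."""
    def __init__(self, n_classes=9, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
            nn.Dropout3d(dropout),
        )
        # For 30 bands and 9x9 patches the feature map is 16 x 20 x 5 x 5.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 20 * 5 * 5, 128), nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                      # x: (batch, 1, bands, H, W)
        return self.classifier(self.features(x))

model = HSI3DCNN()
# L2 regularization enters through the optimizer's weight_decay term.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(4, 1, 30, 9, 9))    # hypothetical batch of 4 patches
print(logits.shape)                            # torch.Size([4, 9])
```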
{
"docid": "753b5f366412b0d6b088baf6cbf154a2",
"text": "We introduce a highly structured family of hard satisfiable 3-SAT formulas corresponding to an ordered spin-glass model from statistical physics. This model has provably “glassy” behavior; that is, it has many local optima with large energy barriers between them, so that local search algorithms get stuck and have difficulty finding the true ground state, i.e., the unique satisfying assignment. We test the hardness of our formulas with two complete Davis-Putnam solvers, Satz and zChaff, and two incomplete solvers, WalkSAT and the recently introduced Survey Propagation algorithm SP. We compare our formulas to random XOR-SAT formulas and to two other generators of hard satisfiable instances, the minimum disagreement parity formulas of Crawford et al., and Hirsch’s hgen2. For the complete solvers the running time of our formulas grows exponentially in √ n, and exceeds that of random XOR-SAT formulas for small problem sizes. More interestingly, our formulas appear to be harder for WalkSAT than any other known generator of satisfiable instances.",
"title": ""
},
{
"docid": "cac3a510f876ed255ff87f2c0db2ed8e",
"text": "The resurgence of cancer immunotherapy stems from an improved understanding of the tumor microenvironment. The PD-1/PD-L1 axis is of particular interest, in light of promising data demonstrating a restoration of host immunity against tumors, with the prospect of durable remissions. Indeed, remarkable clinical responses have been seen in several different malignancies including, but not limited to, melanoma, lung, kidney, and bladder cancers. Even so, determining which patients derive benefit from PD-1/PD-L1-directed immunotherapy remains an important clinical question, particularly in light of the autoimmune toxicity of these agents. The use of PD-L1 (B7-H1) immunohistochemistry (IHC) as a predictive biomarker is confounded by multiple unresolved issues: variable detection antibodies, differing IHC cutoffs, tissue preparation, processing variability, primary versus metastatic biopsies, oncogenic versus induced PD-L1 expression, and staining of tumor versus immune cells. Emerging data suggest that patients whose tumors overexpress PD-L1 by IHC have improved clinical outcomes with anti-PD-1-directed therapy, but the presence of robust responses in some patients with low levels of expression of these markers complicates the issue of PD-L1 as an exclusionary predictive biomarker. An improved understanding of the host immune system and tumor microenvironment will better elucidate which patients derive benefit from these promising agents.",
"title": ""
},
{
"docid": "06a1d19d18e1f23cd252c34b8b9aa0ec",
"text": "To solve crimes, investigators often rely on interviews with witnesses, victims, or criminals themselves. The interviews are transcribed and the pertinent data is contained in narrative form. To solve one crime, investigators may need to interview multiple people and then analyze the narrative reports. There are several difficulties with this process: interviewing people is time consuming, the interviews - sometimes conducted by multiple officers - need to be combined, and the resulting information may still be incomplete. For example, victims or witnesses are often too scared or embarrassed to report or prefer to remain anonymous. We are developing an online reporting system that combines natural language processing with insights from the cognitive interview approach to obtain more information from witnesses and victims. We report here on information extraction from police and witness narratives. We achieved high precision, 94% and 96% and recall, 85% and 90%, for both narrative types.",
"title": ""
},
{
"docid": "898b5800e6ff8a599f6a4ec27310f89a",
"text": "Jenni Anttonen: Using the EMFi chair to measure the user's emotion-related heart rate responses Master's thesis, 55 pages, 2 appendix pages May 2005 The research reported here is part of a multidisciplinary collaborative project that aimed at developing embedded measurement devices using electromechanical film (EMFi) as a basic measurement technology. The present aim was to test if an unobtrusive heart rate measurement device, the EMFi chair, had the potential to detect heart rate changes associated with emotional stimulation. Six-second long visual, auditory, and audiovisual stimuli with negative, neutral, and positive emotional content were presented to 24 participants. Heart rate responses were measured with the EMFi chair and with earlobe photoplethysmography (PPG). Also, subjective ratings of the stimuli were collected. Firstly, the high correlation between the measurement results of the EMFi chair and PPG, r = 0.99, p < 0.001, indicated that the EMFi chair measured heart rate reliably. Secondly, heart rate showed a decelerating response to visual, auditory, and audiovisual emotional stimulation. The emotional stimulation caused statistically significant changes in heart rate at the 6 th second from stimulus onset so that the responses to negative stimulation were significantly lower than the responses to positive stimulation. The results were in line with previous research. The results show that heart rate responses measured with the EMFi chair differed significantly for positive and negative emotional stimulation. These results suggest that the EMFi chair could be used in HCI to measure the user's emotional responses unobtrusively.",
"title": ""
},
{
"docid": "8750fc51d19bbf0cbae2830638f492fd",
"text": "Smartphones are increasingly becoming an ordinary part of our daily lives. With their remarkable capacity, applications used in these devices are extremely varied. In terms of language teaching, the use of these applications has opened new windows of opportunity, innovatively shaping the way instructors teach and students learn. This 4 week-long study aimed to investigate the effectiveness of a mobile application on teaching 40 figurative idioms from the Michigan Corpus of Academic Spoken English (MICASE) corpus compared to traditional activities. Quasi-experimental research design with pretest and posttest was employed to determine the differences between the scores of the control (n=25) and the experimental group (n=25) formed with convenience sampling. Results indicate that participants in the experimental group performed significantly better in the posttest, demonstrating the effectiveness of the mobile application used in this study on learning idioms. The study also provides recommendations towards the use of mobile applications in teaching vocabulary.",
"title": ""
},
{
"docid": "da48aae7960f0871c91d4c6c9f5f44bf",
"text": "It is often difficult to ground text to precise time intervals due to the inherent uncertainty arising from either missing or multiple expressions at year, month, and day time granularities. We address the problem of estimating an excerpt-time model capturing the temporal scope of a given news article excerpt as a probability distribution over chronons. For this, we propose a semi-supervised distribution propagation framework that leverages redundancy in the data to improve the quality of estimated time models. Our method generates an event graph with excerpts as nodes and models various inter-excerpt relations as edges. It then propagates empirical excerpt-time models estimated for temporally annotated excerpts, to those that are strongly related but miss annotations. In our experiments, we first generate a test query set by randomly sampling 100 Wikipedia events as queries. For each query, making use of a standard text retrieval model, we then obtain top-10 documents with an average of 150 excerpts. From these, each temporally annotated excerpt is considered as gold standard. The evaluation measures are first computed for each gold standard excerpt for a single query, by comparing the estimated model with our method to the empirical model from the original expressions. Final scores are reported by averaging over all the test queries. Experiments on the English Gigaword corpus show that our method estimates significantly better time models than several baselines taken from the literature.",
"title": ""
},
{
"docid": "1062f37de56db35202f8979a7ea88efd",
"text": "This paper attempts to evaluate the anti-inflammatory potential and the possible mechanism of action of the leaf extracts and isolated compound(s) of Aerva sanguinolenta (Amaranthaceae), traditionally used in ailments related to inflammation. The anti-inflammatory activity of ethanol extract (ASE) was evaluated by acute, subacute and chronic models of inflammation, while a new cerebroside (‘trans’, ASE-1), isolated from the bioactive ASE and characterized spectroscopically, was tested by carrageenan-induced mouse paw oedema and protein exudation model. To understand the underlying mechanism, we measured the release of pro-inflammatory mediators such as nitric oxide (NO) and prostaglandin (PG)E2, along with the cytokines like tumour necrosis factor (TNF)-α, and interleukins(IL)-1β, IL-6 and IL-12 from lipopolysaccharide (LPS)-stimulated peritoneal macrophages. The results revealed that ASE at 400 mg/kg caused significant reduction of rat paw oedema, granuloma and exudative inflammation, while the inhibition of mouse paw oedema and exudative inflammation by ASE-1 (20 mg/kg) was comparable to that of the standard drug indomethacin (10 mg/kg). Interestingly, both ASE and ASE-1 showed significant inhibition of the expressions of iNOS2 and COX-2, and the down-regulation of the expressions of IL-1β, IL-6, IL-12 and TNF-α, in LPS-stimulated macrophages, via the inhibition of COX-2-mediated PGE2 release. Thus, our results validated the traditional use of A. sanguinolenta leaves in inflammation management.",
"title": ""
},
{
"docid": "4add7de7ed94bc100de8119ebd74967e",
"text": "Wireless signal strength is susceptible to the phenomena of interference, jumping, and instability, which often appear in the positioning results based on Wi-Fi field strength fingerprint database technology for indoor positioning. Therefore, a Wi-Fi and PDR (pedestrian dead reckoning) real-time fusion scheme is proposed in this paper to perform fusing calculation by adaptively determining the dynamic noise of a filtering system according to pedestrian movement (straight or turning), which can effectively restrain the jumping or accumulation phenomena of wireless positioning and the PDR error accumulation problem. Wi-Fi fingerprint matching typically requires a quite high computational burden: To reduce the computational complexity of this step, the affinity propagation clustering algorithm is adopted to cluster the fingerprint database and integrate the information of the position domain and signal domain of respective points. An experiment performed in a fourth-floor corridor at the School of Environment and Spatial Informatics, China University of Mining and Technology, shows that the traverse points of the clustered positioning system decrease by 65%–80%, which greatly improves the time efficiency. In terms of positioning accuracy, the average error is 4.09 m through the Wi-Fi positioning method. However, the positioning error can be reduced to 2.32 m after integration of the PDR algorithm with the adaptive noise extended Kalman filter (EKF).",
"title": ""
},
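The passage above clusters the Wi-Fi fingerprint database with affinity propagation, combining the position and signal domains of each reference point so that online matching only traverses the nearest cluster. The sketch below illustrates that coarse-to-fine step on synthetic corridor data using scikit-learn; the corridor geometry, access-point model and feature scaling are all assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Hypothetical fingerprint database: 60 reference points along a 30 m corridor.
# Position domain: (x, y) in metres; signal domain: RSS (dBm) from 4 access points.
xy = np.column_stack([np.linspace(0, 30, 60), rng.normal(0.0, 0.3, 60)])
rss = np.column_stack([-40 - 0.8 * np.abs(xy[:, 0] - c) + rng.normal(0, 2, 60)
                       for c in (0, 10, 20, 30)])

# Integrate position and signal domains into one feature vector (scaling is an assumption).
features = np.hstack([xy, rss / 10.0])

ap = AffinityPropagation(damping=0.9, max_iter=1000, random_state=0).fit(features)
print("clusters:", len(ap.cluster_centers_indices_))

# Coarse-to-fine matching: compare an online measurement only against the nearest cluster.
query = features[25] + rng.normal(0, 0.1, features.shape[1])
nearest = int(np.argmin(np.linalg.norm(ap.cluster_centers_ - query, axis=1)))
candidates = np.where(ap.labels_ == nearest)[0]
print("candidate reference points:", candidates)
```

Restricting fingerprint matching to one cluster is what produces the 65-80% reduction in traversed points reported in the abstract.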
{
"docid": "df29784edea11d395547ca23830f2f62",
"text": "The clinical efficacy of current antidepressant therapies is unsatisfactory; antidepressants induce a variety of unwanted effects, and, moreover, their therapeutic mechanism is not clearly understood. Thus, a search for better and safer agents is continuously in progress. Recently, studies have demonstrated that zinc and magnesium possess antidepressant properties. Zinc and magnesium exhibit antidepressant-like activity in a variety of tests and models in laboratory animals. They are active in forced swim and tail suspension tests in mice and rats, and, furthermore, they enhance the activity of conventional antidepressants (e.g., imipramine and citalopram). Zinc demonstrates activity in the olfactory bulbectomy, chronic mild and chronic unpredictable stress models in rats, while magnesium is active in stress-induced depression-like behavior in mice. Clinical studies demonstrate that the efficacy of pharmacotherapy is enhanced by supplementation with zinc and magnesium. The antidepressant mechanisms of zinc and magnesium are discussed in the context of glutamate, brain-derived neurotrophic factor (BDNF) and glycogen synthase kinase-3 (GSK-3) hypotheses. All the available data indicate the importance of zinc and magnesium homeostasis in the psychopathology and therapy of affective disorders.",
"title": ""
},
{
"docid": "feef714b024ad00086a5303a8b74b0a4",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present STN-OCR, a step towards semi-supervised neural networks for scene text recognition that can be optimized end-to-end. In contrast to most existing works that consist of multiple deep neural networks and several pre-processing steps we propose to use a single deep neural network that learns to detect and recognize text from natural images in a semi-supervised way. STN-OCR is a network that integrates and jointly learns a spatial transformer network [16], that can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We investigate how our model behaves on a range of different tasks (detection and recognition of characters, and lines of text). Experimental results on public benchmark datasets show the ability of our model to handle a variety of different tasks, without substantial changes in its overall network structure.",
"title": ""
},
{
"docid": "332d517d07187d2403a672b08365e5ef",
"text": "Please cite this article in press as: C. Galleguillos doi:10.1016/j.cviu.2010.02.004 The goal of object categorization is to locate and identify instances of an object category within an image. Recognizing an object in an image is difficult when images include occlusion, poor quality, noise or background clutter, and this task becomes even more challenging when many objects are present in the same scene. Several models for object categorization use appearance and context information from objects to improve recognition accuracy. Appearance information, based on visual cues, can successfully identify object classes up to a certain extent. Context information, based on the interaction among objects in the scene or global scene statistics, can help successfully disambiguate appearance inputs in recognition tasks. In this work we address the problem of incorporating different types of contextual information for robust object categorization in computer vision. We review different ways of using contextual information in the field of object categorization, considering the most common levels of extraction of context and the different levels of contextual interactions. We also examine common machine learning models that integrate context information into object recognition frameworks and discuss scalability, optimizations and possible future approaches. 2010 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
fb12b45b035245d2d504113f04709f1c
|
Anonymity of Bitcoin Transactions An Analysis of Mixing Services
|
[
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
}
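The analysis above works over the publicly visible Bitcoin transaction graph, for example by isolating the largest transfers and aggregating flows per address. The sketch below shows the general flavour of such graph-based statistics on a handful of made-up transactions using networkx; the addresses and amounts are fabricated for illustration only.

```python
import networkx as nx

# Hypothetical, simplified transactions: (sending address, receiving address, BTC amount).
transactions = [
    ("addr_A", "addr_B", 25.0),
    ("addr_B", "addr_C", 10.0),
    ("addr_B", "addr_D", 14.9),
    ("addr_E", "addr_B", 400.0),     # a single very large transfer
    ("addr_C", "addr_A", 2.5),
]

g = nx.MultiDiGraph()
for src, dst, amount in transactions:
    g.add_edge(src, dst, amount=amount)

# Largest transfers, analogous to isolating the "large transactions" studied above.
largest = sorted(g.edges(data=True), key=lambda e: e[2]["amount"], reverse=True)[:2]
print("largest transfers:", [(u, v, d["amount"]) for u, v, d in largest])

# Per-address totals received, a simple statistic over the transaction graph.
received = {n: sum(d["amount"] for _, _, d in g.in_edges(n, data=True)) for n in g.nodes}
print("total received:", received)
```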
] |
[
{
"docid": "dff035a6e773301bd13cd0b71d874861",
"text": "Over the last few years, with the immense popularity of the Kinect, there has been renewed interest in developing methods for human gesture and action recognition from 3D skeletal data. A number of approaches have been proposed to extract representative features from 3D skeletal data, most commonly hard wired geometric or bio-inspired shape context features. We propose a hierarchial dynamic framework that first extracts high level skeletal joints features and then uses the learned representation for estimating emission probability to infer action sequences. Currently gaussian mixture models are the dominant technique for modeling the emission distribution of hidden Markov models. We show that better action recognition using skeletal features can be achieved by replacing gaussian mixture models by deep neural networks that contain many layers of features to predict probability distributions over states of hidden Markov models. The framework can be easily extended to include a ergodic state to segment and recognize actions simultaneously.",
"title": ""
},
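The hybrid approach above replaces GMM emission models with a neural network whose class posteriors are converted to scaled likelihoods for the HMM. The numpy sketch below shows only that conversion and a Viterbi decode over made-up posteriors, priors and transitions; all numbers are illustrative and the deep network itself is omitted.

```python
import numpy as np

def posteriors_to_loglik(posteriors, state_priors):
    """Hybrid DNN-HMM trick: p(x|s) is proportional to p(s|x) / p(s)."""
    return np.log(posteriors + 1e-12) - np.log(state_priors + 1e-12)

def viterbi(log_emissions, log_trans, log_init):
    T, S = log_emissions.shape
    delta = log_init + log_emissions[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # S x S: previous state -> current state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emissions[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Illustrative 3-state action HMM over 6 frames of "DNN" posteriors.
posteriors = np.array([
    [0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1], [0.1, 0.3, 0.6], [0.1, 0.1, 0.8],
])
priors = np.array([0.4, 0.35, 0.25])                 # state frequencies in training data
trans = np.log(np.array([[0.8, 0.2, 0.0],
                         [0.0, 0.8, 0.2],
                         [0.0, 0.0, 1.0]]) + 1e-12)  # left-to-right action model
init = np.log(np.array([1.0, 0.0, 0.0]) + 1e-12)

log_em = posteriors_to_loglik(posteriors, priors)
print(viterbi(log_em, trans, init))                  # e.g. [0, 0, 1, 1, 2, 2]
```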
{
"docid": "b4622c9a168cd6e6f852bcc640afb4b3",
"text": "New developments in osteotomy techniques and methods of fixation have caused a revival of interest of osteotomies around the knee. The current consensus on the indications, patient selection and the factors influencing the outcome after high tibial osteotomy is presented. This paper highlights recent research aimed at joint pressure redistribution, fixation stability and bone healing that has led to improved surgical techniques and a decrease of post-operative time to full weight-bearing.",
"title": ""
},
{
"docid": "d5c72dd4b660376b122bbe71005335d7",
"text": "The effect of television violence on boys' aggression was investigated with consideration of teacher-rated characteristic aggressiveness, timing of frustration, and violence-related cues as moderators. Boys in Grades 2 and 3 (N = 396) watched violent or nonviolent TV in groups of 6, and half the groups were later exposed to a cue associated with the violent TV program. They were frustrated either before or after TV viewing. Aggression was measured by naturalistic observation during a game of floor hockey. Groups containing more characteristically high-aggressive boys showed higher aggression following violent TV plus the cue than following violent TV alone, which in turn produced more aggression than did the nonviolent TV condition. There was evidence that both the violent content and the cue may have suppressed aggression among groups composed primarily of boys low in characteristic aggressiveness. Results were interpreted in terms of current information-processing theories of media effects on aggression.",
"title": ""
},
{
"docid": "78d1a0f7a66d3533b1a00d865eeb6abd",
"text": "Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy and supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under -edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.",
"title": ""
},
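A minimal sketch of the randomized-response release step described above, assuming the standard construction in which each potential edge indicator is reported truthfully with probability e^ε / (1 + e^ε); the toy graph and the value of ε are illustrative assumptions, and the subsequent model-fitting stage is omitted.

```python
import math
import random

def randomized_response_graph(edges, n_nodes, epsilon, seed=0):
    """Report each potential edge truthfully with prob. e^eps/(1+e^eps), flipped otherwise."""
    rng = random.Random(seed)
    keep_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    edge_set = set(edges)
    noisy = set()
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            truth = (i, j) in edge_set
            reported = truth if rng.random() < keep_truth else not truth
            if reported:
                noisy.add((i, j))
    return noisy

true_edges = {(0, 1), (1, 2), (2, 3), (0, 3)}        # toy 4-node social network
print(randomized_response_graph(true_edges, n_nodes=4, epsilon=2.0))
```

Because the ratio of truthful to flipped reporting probabilities is exactly e^ε, each individual relationship enjoys ε-edge differential privacy, while downstream estimation can correct for the known flipping rate.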
{
"docid": "8d25383c229a3a585d54ac71e2f22fb4",
"text": "This study aimed to determine the effects of a flipped classroom (i.e., reversal of time allotment for lecture and homework) and innovative learning activities on academic success and the satisfaction of nursing students. A quasi-experimental design was used to compare three approaches to learning: traditional lecture only (LO), lecture and lecture capture back-up (LLC), and the flipped classroom approach of lecture capture with innovative classroom activities (LCI). Examination scores were higher for the flipped classroom LCI group (M = 81.89, SD = 5.02) than for both the LLC group (M = 80.70, SD = 4.25), p = 0.003, and the LO group (M = 79.79, SD = 4.51), p < 0.001. Students were less satisfied with the flipped classroom method than with either of the other methods (p < 0.001). Blending new teaching technologies with interactive classroom activities can result in improved learning but not necessarily improved student satisfaction.",
"title": ""
},
{
"docid": "641610a41dc50be68cc570068bd6d451",
"text": "Preschool children (N = 107) were divided into 4 groups on the basis of maternal report; home and shelter groups exposed to verbal and physical conflict, a home group exposed to verbal conflict only, and a home control group. Parental ratings of behavior problems and competencies and children's self-report data were collected. Results show that verbal conflict only was associated with a moderate level of conduct problems: verbal plus physical conflict was associated with clinical levels of conduct problems and moderate levels of emotional problems; and verbal plus physical conflict plus shelter residence was associated with clinical levels of conduct problems, higher level of emotional problems, and lower levels of social functioning and perceived maternal acceptance. Findings suggests a direct relationship between the nature of the conflict and residence and type and extent of adjustment problems.",
"title": ""
},
{
"docid": "1adabe21b99d7b26851d78c9a607b01d",
"text": "Text Summarization is a way to produce a text, which contains the significant portion of information of the original text(s). Different methodologies are developed till now depending upon several parameters to find the summary as the position, format and type of the sentences in an input text, formats of different words, frequency of a particular word in a text etc. But according to different languages and input sources, these parameters are varied. As result the performance of the algorithm is greatly affected. The proposed approach summarizes a text without depending upon those parameters. Here, the relevance of the sentences within the text is derived by Simplified Lesk algorithm and WordNet, an online dictionary. This approach is not only independent of the format of the text and position of a sentence in a text, as the sentences are arranged at first according to their relevance before the summarization process, the percentage of summarization can be varied according to needs. The proposed approach gives around 80% accurate results on 50% summarization of the original text with respect to the manually summarized result, performed on 50 different types and lengths of texts. We have achieved satisfactory results even upto 25% summarization of the original text.",
"title": ""
},
{
"docid": "224cb33193938d5bfb8d604a86d3641a",
"text": "We show how machine vision, learning, and planning can be combined to solve hierarchical consensus tasks. Hierarchical consensus tasks seek correct answers to a hierarchy of subtasks, where branching depends on answers at preceding levels of the hierarchy. We construct a set of hierarchical classification models that aggregate machine and human effort on different subtasks and use these inferences in planning. Optimal solution of hierarchical tasks is intractable due to the branching of task hierarchy and the long horizon of these tasks. We study Monte Carlo planning procedures that can exploit task structure to constrain the policy space for tractability. We evaluate the procedures on data collected from Galaxy Zoo II in allocating human effort and show that significant gains can be achieved.",
"title": ""
},
{
"docid": "1d356c920fb720252d827164752dffe5",
"text": "In the early days of machine learning, Donald Michie introduced two orthogonal dimensions to evaluate performance of machine learning approaches – predictive accuracy and comprehensibility of the learned hypotheses. Later definitions narrowed the focus to measures of accuracy. As a consequence, statistical/neuronal approaches have been favoured over symbolic approaches to machine learning, such as inductive logic programming (ILP). Recently, the importance of comprehensibility has been rediscovered under the slogan ‘explainable AI’. This is due to the growing interest in black-box deep learning approaches in many application domains where it is crucial that system decisions are transparent and comprehensible and in consequence trustworthy. I will give a short history of machine learning research followed by a presentation of two specific approaches of symbolic machine learning – inductive logic programming and end-user programming. Furthermore, I will present current work on explanation generation. Die Arbeitsweise der Algorithmen, die über uns entscheiden, muss transparent gemacht werden, und wir müssen die Möglichkeit bekommen, die Algorithmen zu beeinflussen. Dazu ist es unbedingt notwendig, dass die Algorithmen ihre Entscheidung begründen! Peter Arbeitsloser zu John of Us, Qualityland, Marc-Uwe Kling, 2017",
"title": ""
},
{
"docid": "2382ab2b71be5dfbd1ba9fb4bf6536fc",
"text": "A full-bridge converter which employs a coupled inductor to achieve zero-voltage switching of the primary switches in the entire line and load range is described. Because the coupled inductor does not appear as a series inductance in the load current path, it does not cause a loss of duty cycle or severe voltage ringing across the output rectifier. The operation and performance of the proposed converter is verified on a 670-W prototype.",
"title": ""
},
{
"docid": "295ec5187615caec8b904c81015f4999",
"text": "As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to \"hide\" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways.\n In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).",
"title": ""
},
{
"docid": "33fe68214ea062f2cdb310a74a9d6d8b",
"text": "In this study, the authors examine the relationship between abusive supervision and employee workplace deviance. The authors conceptualize abusive supervision as a type of aggression. They use work on retaliation and direct and displaced aggression as a foundation for examining employees' reactions to abusive supervision. The authors predict abusive supervision will be related to supervisor-directed deviance, organizational deviance, and interpersonal deviance. Additionally, the authors examine the moderating effects of negative reciprocity beliefs. They hypothesized that the relationship between abusive supervision and supervisor-directed deviance would be stronger when individuals hold higher negative reciprocity beliefs. The results support this hypothesis. The implications of the results for understanding destructive behaviors in the workplace are examined.",
"title": ""
},
{
"docid": "18acdeb37257f2f7f10a5baa8957a257",
"text": "Time-memory trade-off methods provide means to invert one way functions. Such attacks offer a flexible trade-off between running time and memory cost in accordance to users' computational resources. In particular, they can be applied to hash values of passwords in order to recover the plaintext. They were introduced by Martin Hellman and later improved by Philippe Oechslin with the introduction of rainbow tables. The drawbacks of rainbow tables are that they do not always guarantee a successful inversion. We address this issue in this paper. In the context of passwords, it is pertinent that frequently used passwords are incorporated in the rainbow table. It has been known that up to 4 given passwords can be incorporated into a chain but it is an open problem if more than 4 passwords can be achieved. We solve this problem by showing that it is possible to incorporate more of such passwords along a chain. Furthermore, we prove that this results in faster recovery of such passwords during the online running phase as opposed to assigning them at the beginning of the chains. For large chain lengths, the average improvement translates to 3 times the speed increase during the online recovery time.",
"title": ""
},
{
"docid": "2d7ff73a3fb435bd11633f650b23172e",
"text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.",
"title": ""
},
{
"docid": "ec4bf9499f16c415ccb586a974671bf1",
"text": "Memory circuit elements, namely memristive, memcapacitive and meminductive systems, are gaining considerable attention due to their ubiquity and use in diverse areas of science and technology. Their modeling within the most widely used environment, SPICE, is thus critical to make substantial progress in the design and analysis of complex circuits. Here, we present a collection of models of different memory circuit elements and provide a methodology for their accurate and reliable modeling in the SPICE environment. We also provide codes of these models written in the most popular SPICE versions (PSpice, LTspice, HSPICE) for the benefit of the reader. We expect this to be of great value to the growing community of scientists interested in the wide range of applications of memory circuit elements.",
"title": ""
},
{
"docid": "cff062b48160fd1551e530125a03d1f8",
"text": "In this paper, we consider a multiple-input multiple-output wireless powered communication network, where multiple users harvest energy from a dedicated power station in order to be able to transmit their information signals to an information receiving station. Employing a practical non-linear energy harvesting (EH) model, we propose a joint time allocation and power control scheme, which takes into account the uncertainty regarding the channel state information (CSI) and provides robustness against imperfect CSI knowledge. In particular, we formulate two non-convex optimization problems for different objectives, namely system sum throughput maximization and the maximization of the minimum individual throughput across all wireless powered users. To overcome the non-convexity, we apply several transformations along with a one-dimensional search to obtain an efficient resource allocation algorithm. Numerical results reveal that a significant performance gain can be achieved when the resource allocation is designed based on the adopted non-linear EH model instead of the conventional linear EH model. Besides, unlike a non-robust baseline scheme designed for perfect CSI, the proposed resource allocation schemes are shown to be robust against imperfect CSI knowledge.",
"title": ""
},
{
"docid": "acf6a62e487b79fc0500aa5e6bbb0b0b",
"text": "This paper proposes a low-cost, easily realizable strategy to equip a reinforcement learning (RL) agent the capability of behaving ethically. Our model allows the designers of RL agents to solely focus on the task to achieve, without having to worry about the implementation of multiple trivial ethical patterns to follow. Based on the assumption that the majority of human behavior, regardless which goals they are achieving, is ethical, our design integrates human policy with the RL policy to achieve the target objective with less chance of violating the ethical code that human beings normally obey.",
"title": ""
},
{
"docid": "1ffc6db796b8e8a03165676c1bc48145",
"text": "UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. e result is a practical scalable algorithm that applies to real world data. e UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.",
"title": ""
},
{
"docid": "f3a8e58eec0f243ae9fdfae78f75657d",
"text": "This paper studies the decentralized coded caching for a Fog Radio Access Network (F-RAN), whereby two edge-nodes (ENs) connected to a cloud server via fronthaul links with limited capacity are serving the requests of K r users. We consider all ENs and users are equipped with caches. A decentralized content placement is proposed to independently store contents at each network node during the off-peak hours. After that, we design a coded delivery scheme in order to deliver the user demands during the peak-hours under the objective of minimizing the normalized delivery time (NDT), which refers to the worst case delivery latency. An information-theoretic lower bound on the minimum NDT is derived for arbitrary number of ENs and users. We evaluate numerically the performance of the decentralized scheme. Additionally, we prove the approximate optimality of the decentralized scheme for a special case when the caches are only available at the ENs.",
"title": ""
}
] |
scidocsrr
|
8f3ae7840f3b78d037caf594fdc690f3
|
Structuring Cooperative Behavior Planning Implementations for Automated Driving
|
[
{
"docid": "045a56e333b1fe78677b8f4cc4c20ecc",
"text": "Swarm robotics is an approach to collective robotics that takes inspiration from the self-organized behaviors of social animals. Through simple rules and local interactions, swarm robotics aims at designing robust, scalable, and flexible collective behaviors for the coordination of large numbers of robots. In this paper, we analyze the literature from the point of view of swarm engineering: we focus mainly on ideas and concepts that contribute to the advancement of swarm robotics as an engineering field and that could be relevant to tackle real-world applications. Swarm engineering is an emerging discipline that aims at defining systematic and well founded procedures for modeling, designing, realizing, verifying, validating, operating, and maintaining a swarm robotics system. We propose two taxonomies: in the first taxonomy, we classify works that deal with design and analysis methods; in the second taxonomy, we classify works according to the collective behavior studied. We conclude with a discussion of the current limits of swarm robotics as an engineering discipline and with suggestions for future research directions.",
"title": ""
},
{
"docid": "fc2c995d20c83a72ea46f5055d1847a1",
"text": "In this paper, we present a novel probabilistic compact representation of the on-road environment, i.e., the dynamic probabilistic drivability map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. The DPDM is a flexible representation and readily accepts data from a variety of sensor modalities to represent the on-road environment as a spatially coded data structure, encapsulating spatial, dynamic, and legal information. Using the DPDM, we develop a general predictive system for LCMs. We formulate the LCM assistance system to solve for the minimum-cost solution to merge or change lanes, which is solved efficiently using dynamic programming over the DPDM. Based on the DPDM, the LCM system recommends the required acceleration and timing to safely merge or change lanes with minimum cost. System performance has been extensively validated using real-world on-road data, including urban driving, on-ramp merges, and both dense and free-flow highway conditions.",
"title": ""
}
] |
[
{
"docid": "da1cecae4f925f331fda67c784e6635d",
"text": "This paper surveys recent literature on vehicular social networks that are a particular class of vehicular ad hoc networks, characterized by social aspects and features. Starting from this pillar, we investigate perspectives on next-generation vehicles under the assumption of social networking for vehicular applications (i.e., safety and entertainment applications). This paper plays a role as a starting point about socially inspired vehicles and mainly related applications, as well as communication techniques. Vehicular communications can be considered the “first social network for automobiles” since each driver can share data with other neighbors. For instance, heavy traffic is a common occurrence in some areas on the roads (e.g., at intersections, taxi loading/unloading areas, and so on); as a consequence, roads become a popular social place for vehicles to connect to each other. Human factors are then involved in vehicular ad hoc networks, not only due to the safety-related applications but also for entertainment purposes. Social characteristics and human behavior largely impact on vehicular ad hoc networks, and this arises to the vehicular social networks, which are formed when vehicles (individuals) “socialize” and share common interests. In this paper, we provide a survey on main features of vehicular social networks, from novel emerging technologies to social aspects used for mobile applications, as well as main issues and challenges. Vehicular social networks are described as decentralized opportunistic communication networks formed among vehicles. They exploit mobility aspects, and basics of traditional social networks, in order to create novel approaches of message exchange through the detection of dynamic social structures. An overview of the main state-of-the-art on safety and entertainment applications relying on social networking solutions is also provided.",
"title": ""
},
{
"docid": "c4ff647b5962d3d713577c16a7a9cae5",
"text": "In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter derived from the image sensor specifications. We present results using a 24-hour dataset collected using an autonomous road vehicle, demonstrating increased consistency of the illumination invariant images in comparison to raw RGB images during daylight hours. We then present three example applications of how illumination invariant imaging can improve performance in the context of vision-based autonomous vehicles: 6-DoF metric localisation using monocular cameras over a 24-hour period, life-long visual localisation and mapping using stereo, and urban scene classification in changing environments. Our ultimate goal is robust and reliable vision-based perception and navigation an attractive proposition for low-cost autonomy for road vehicles.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "ab7d6a9c9c07ee1a60f01d4017a3a25b",
"text": "[Context and Motivation] Many a tool for finding ambiguities in natural language (NL) requirements specifications (RSs) is based on a parser and a parts-of-speech identifier, which are inherently imperfect on real NL text. Therefore, any such tool inherently has less than 100% recall. Consequently, running such a tool on a NL RS for a highly critical system does not eliminate the need for a complete manual search for ambiguity in the RS. [Question/Problem] Can an ambiguity-finding tool (AFT) be built that has 100% recall on the types of ambiguities that are in the AFT’s scope such that a manual search in an RS for ambiguities outside the AFT’s scope is significantly easier than a manual search of the RS for all ambiguities? [Principal Ideas/Results] This paper presents the design of a prototype AFT, SREE (Systemized Requirements Engineering Environment), whose goal is achieving a 100% recall rate for the ambiguities in its scope, even at the cost of a precision rate of less than 100%. The ambiguities that SREE searches for by lexical analysis are the ones whose keyword indicators are found in SREE’s ambiguity-indicator corpus that was constructed based on studies of several industrial strength RSs. SREE was run on two of these industrial strength RSs, and the time to do a completely manual search of these RSs is compared to the time to reject the false positives in SREE’s output plus the time to do a manual search of these RSs for only ambiguities not in SREE’s scope. [Contribution] SREE does not achieve its goals. However, the time comparison shows that the approach to divide ambiguity finding between an AFT with 100% recall for some types of ambiguity and a manual search for only the other types of ambiguity is promising enough to justify more work to improve the implementation of the approach. Some specific improvement suggestions are offered.",
"title": ""
},
{
"docid": "1f4376dcc726b7ac5726620d887c60c3",
"text": "Recently sparse representation has been applied to visual tracker by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers as it needs to solve an ℓ1 norm related minimization problem for many times. While these L1 trackers showed impressive tracking accuracies, they are very computationally demanding and the speed bottleneck is the solver to ℓ1 norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new ℓ1 norm related minimization model is proposed to improve the tracking accuracy by adding an ℓ1 norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting ℓ1 norm related minimization problem with guaranteed quadratic convergence. The great running time efficiency and tracking accuracy of the proposed tracker is validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers.",
"title": ""
},
{
"docid": "4c0869847079b11ec8e0a6b9714b2d09",
"text": "This paper provides a tutorial overview of the latest generation of passive optical network (PON) technology standards nearing completion in ITU-T. The system is termed NG-PON2 and offers a fiber capacity of 40 Gbit/s by exploiting multiple wavelengths at dense wavelength division multiplexing channel spacing and tunable transceiver technology in the subscriber terminals (ONUs). Here, the focus is on the requirements from network operators that are driving the standards developments and the technology selection prior to standardization. A prestandard view of the main physical layer optical specifications is also given, ahead of final ITU-T approval.",
"title": ""
},
{
"docid": "5236f684bc0fdf11855a439c9d3256f6",
"text": "The smart home is an environment, where heterogeneous electronic devices and appliances are networked together to provide smart services in a ubiquitous manner to the individuals. As the homes become smarter, more complex, and technology dependent, the need for an adequate security mechanism with minimum individual’s intervention is growing. The recent serious security attacks have shown how the Internet-enabled smart homes can be turned into very dangerous spots for various ill intentions, and thus lead the privacy concerns for the individuals. For instance, an eavesdropper is able to derive the identity of a particular device/appliance via public channels that can be used to infer in the life pattern of an individual within the home area network. This paper proposes an anonymous secure framework (ASF) in connected smart home environments, using solely lightweight operations. The proposed framework in this paper provides efficient authentication and key agreement, and enables devices (identity and data) anonymity and unlinkability. One-time session key progression regularly renews the session key for the smart devices and dilutes the risk of using a compromised session key in the ASF. It is demonstrated that computation complexity of the proposed framework is low as compared with the existing schemes, while security has been significantly improved.",
"title": ""
},
{
"docid": "c84d012151d270892c38d80ccd46764a",
"text": "Signal processing algorithms for near end listening enhancement allow to improve the intelligibility of clean (far end) speech for the near end listener who perceives not only the far end speech but also ambient background noise. A typical scenario is mobile communication conducted in the presence of acoustical background noise such as traffic or babble noise. In this contribution we analyze the calculation rules of the Speech Intelligibility Index (SII) and derive a simple condition for the speech spectrum level of every subband that maximizes the SII for a given noise spectrum level. This rule is used to derive a theoretical bound for a maximum achievable SII as well as a new SII optimized algorithm for near end listening enhancement. The impact of ignoring masking effects in the algorithm is also investigated and seconds our SNR recovery algorithm proposed earlier. Instrumental evaluation shows that the new algorithm performs close to the established theoretical bound.",
"title": ""
},
{
"docid": "75fa6fce044972e5b0946161a5d2281c",
"text": "The concept of a glucose-responsive insulin (GRI) has been a recent objective of diabetes technology. The idea behind the GRI is to create a therapeutic that modulates its potency, concentration or dosing relative to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. From the perspective of the medicinal chemist, the GRI is also important as a generalized model of a potentially new generation of therapeutics that adjust potency in response to a critical therapeutic marker. The aim of this Perspective is to highlight emerging concepts, including mathematical modelling and the molecular engineering of insulin itself and its potency, towards a viable GRI. We briefly outline some of the most important recent progress toward this goal and also provide a forward-looking viewpoint, which asks if there are new approaches that could spur innovation in this area as well as to encourage synthetic chemists and chemical engineers to address the challenges and promises offered by this therapeutic approach.",
"title": ""
},
{
"docid": "916fe1c8bac1d6d9f1df4179d6674ff2",
"text": "The UN High-Level Meeting on Non-Communicable Diseases (NCDs) in September, 2011, is an unprecedented opportunity to create a sustained global movement against premature death and preventable morbidity and disability from NCDs, mainly heart disease, stroke, cancer, diabetes, and chronic respiratory disease. The increasing global crisis in NCDs is a barrier to development goals including poverty reduction, health equity, economic stability, and human security. The Lancet NCD Action Group and the NCD Alliance propose five overarching priority actions for the response to the crisis--leadership, prevention, treatment, international cooperation, and monitoring and accountability--and the delivery of five priority interventions--tobacco control, salt reduction, improved diets and physical activity, reduction in hazardous alcohol intake, and essential drugs and technologies. The priority interventions were chosen for their health effects, cost-effectiveness, low costs of implementation, and political and financial feasibility. The most urgent and immediate priority is tobacco control. We propose as a goal for 2040, a world essentially free from tobacco where less than 5% of people use tobacco. Implementation of the priority interventions, at an estimated global commitment of about US$9 billion per year, will bring enormous benefits to social and economic development and to the health sector. If widely adopted, these interventions will achieve the global goal of reducing NCD death rates by 2% per year, averting tens of millions of premature deaths in this decade.",
"title": ""
},
{
"docid": "d2e56a45e0b901024776d36eaa5fa998",
"text": "In this paper, we present our results of automatic gesture recognition systems using different types of cameras in order to compare them in reference to their performances in segmentation. The acquired image segments provide the data for further analysis. The images of a single camera system are mostly used as input data in the research area of gesture recognition. In comparison to that, the analysis results of a stereo color camera and a thermal camera system are used to determine the advantages and disadvantages of these camera systems. On this basis, a real-time gesture recognition system is proposed to classify alphabets (A-Z) and numbers (0-9) with an average recognition rate of 98% using Hidden Markov Models (HMM).",
"title": ""
},
{
"docid": "817f9509afcdbafc60ecac2d0b8ef02d",
"text": "Abstract—In most regards, the twenty-first century may not bring revolutionary changes in electronic messaging technology in terms of applications or protocols. Security issues that have long been a concern in messaging application are finally being solved using a variety of products. Web-based messaging systems are rapidly evolving the text-based conversation. The users have the right to protect their privacy from the eavesdropper, or other parties which interferes the privacy of the users for such purpose. The chatters most probably use the instant messages to chat with others for personal issue; in which no one has the right eavesdrop the conversation channel and interfere this privacy. This is considered as a non-ethical manner and the privacy of the users should be protected. The author seeks to identify the security features for most public instant messaging services used over the internet and suggest some solutions in order to encrypt the instant messaging over the conversation channel. The aim of this research is to investigate through forensics and sniffing techniques, the possibilities of hiding communication using encryption to protect the integrity of messages exchanged. Authors used different tools and methods to run the investigations. Such tools include Wireshark packet sniffer, Forensics Tool Kit (FTK) and viaForensic mobile forensic toolkit. Finally, authors will report their findings on the level of security that encryption could provide to instant messaging services.",
"title": ""
},
{
"docid": "926db14af35f9682c28a64e855fb76e5",
"text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",
"title": ""
},
{
"docid": "e5f5aa53a90f482fb46a7f02bae27b20",
"text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.",
"title": ""
},
{
"docid": "55d08da55a64d35f3115911f3cc22e82",
"text": "Process modeling has become an essential part of many organizations for documenting, analyzing and redesigning their business operations and to support them with suitable information systems. In order to serve this purpose, it is important for process models to be well grounded in formal and precise semantics. While behavioural semantics of process models are well understood, there is a considerable gap of research into the semantic aspects of their text labels and natural language descriptions. The aim of this paper is to make this research gap more transparent. To this end, we clarify the role of textual content in process models and the challenges that are associated with the interpretation, analysis, and improvement of their natural language parts. More specifically, we discuss particular use cases of semantic process modeling to identify 25 challenges. For each challenge, we identify prior research and discuss directions for addressing them.",
"title": ""
},
{
"docid": "7660ad596801203d1c9d1635be6b90d9",
"text": "a r t i c l e i n f o This study investigates the role of dynamic capabilities in the resource-based view framework, and also explores the relationships among different resources, different dynamic capabilities and firm performance. Employing samples of top 1000 Taiwanese companies, the findings show that dynamic capabilities can mediate the firm's valuable, rare, inimitable and non-substitutable (VRIN) resources to improve performance. On the contrary, non-VRIN resources have an insignificant mediating effect. Among three types of dynamic capabilities, dynamic learning capability most effectively mediates the influence of VRIN resources on performance. Furthermore, the important role of VRIN resources is addressed because of their direct effects on performance based on RBV, as well as their indirect effect via the mediation of dynamic capabilities.",
"title": ""
},
{
"docid": "ba901f44b42820202d0e81671e7f189e",
"text": "In this paper, we present a novel method for visual loop-closure detection in autonomous robot navigation. Our method, which we refer to as bag-of-raw-features or BoRF, uses scale-invariant visual features (such as SIFT) directly, rather than their vector-quantized representation or bag-of-words (BoW), which is popular in recent studies of the problem. BoRF avoids the offline process of vocabulary construction, and does not suffer from the perceptual aliasing problem of BoW, thereby significantly improving the recall performance. To reduce the computational cost of direct feature matching, we exploit the fact that images in the case of robot navigation are acquired sequentially, and that feature matching repeatability with respect to scale can be learned and used to reduce the number of the features considered for matching. The proposed method is tested experimentally using indoor visual SLAM image sequences.",
"title": ""
},
{
"docid": "6f22283e5142035d6f6f9d5e06ab1cd2",
"text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"title": ""
},
{
"docid": "1af4ccae8012df51fb9c435a49aeb7d7",
"text": "The Common N-Gram (CNG) classifier is a text classification algorithm based on the comparison of frequencies of character n-grams (strings of characters of length n) that are the most common in the considered documents and classes of documents. We present a text analytic visualization system that employs the CNG approach for text classification and uses the differences in frequency values of common n-grams in order to visually compare documents at the sub-word level. The visualization method provides both an insight into n-gram characteristics of documents or classes of documents and a visual interpretation of the workings of the CNG classifier.",
"title": ""
},
{
"docid": "7d8dcb65acd5e0dc70937097ded83013",
"text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"title": ""
}
] |
scidocsrr
|
bb37cd6a960f92b14ad7f59d859508e0
|
Deep Relative Tracking
|
[
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "483ab105bfe99c867690891f61bb0336",
"text": "In this paper we propose a robust object tracking algorithm using a collaborative model. As the main challenge for object tracking is to account for drastic appearance change, we propose a robust appearance model that exploits both holistic templates and local representations. We develop a sparsity-based discriminative classifier (SD-C) and a sparsity-based generative model (SGM). In the S-DC module, we introduce an effective method to compute the confidence value that assigns more weights to the foreground than the background. In the SGM module, we propose a novel histogram-based method that takes the spatial information of each patch into consideration with an occlusion handing scheme. Furthermore, the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem. Numerous experiments on various challenging videos demonstrate that the proposed tracker performs favorably against several state-of-the-art algorithms.",
"title": ""
}
] |
[
{
"docid": "12e088ccb86094d58c682e4071cce0a6",
"text": "Are there systematic differences between people who use social network sites and those who stay away, despite a familiarity with them? Based on data from a survey administered to a diverse group of young adults, this article looks at the predictors of SNS usage, with particular focus on Facebook, MySpace, Xanga, and Friendster. Findings suggest that use of such sites is not randomly distributed across a group of highly wired users. A person's gender, race and ethnicity, and parental educational background are all associated with use, but in most cases only when the aggregate concept of social network sites is disaggregated by service. Additionally, people with more experience and autonomy of use are more likely to be users of such sites. Unequal participation based on user background suggests that differential adoption of such services may be contributing to digital inequality.",
"title": ""
},
{
"docid": "cc4c946bc42180e51912b53e915a4e1a",
"text": "Background: Photovoltaic (PV) array which is composed of modules is considered as the fundamental power conversion unit of a PV generator system. The PV array has nonlinear characteristics and it is quite expensive and takes much time to get the operating curves of PV array under varying operating conditions. In order to overcome these obstacles, common and simple models of solar panel have been developed and integrated to many engineering software including Matlab/Simulink. However, these models are not adequate for application involving hybrid energy system since they need a flexible tuning of some parameters in the system and not easily understandable for readers to use by themselves. Therefore, this paper presents a step-by-step procedure for the simulation of PV cells/modules/ arrays with Tag tools in Matlab/Simulink. A DS-100M solar panel is used as reference model. The operation characteristics of PV array are also investigated at a wide range of operating conditions and physical parameters. Result: The output characteristics curves of the model match the characteristics of DS-100M solar panel. The output power, current and voltage decreases when the solar irradiation reduces from 1000 to 100 W/m. When the temperature decreases, the output power and voltage increases marginally whereas the output current almost keeps constant. Shunt resistance has significant effect on the operating curves of solar PV array as low power output is recorded if the value of shunt resistance varies from 1000 ohms to 0.1 ohms. Conclusion: The proposed procedure provides an accurate, reliable and easy-to-tune model of photovoltaic array. Furthermore, it also robust advantageous in investigating the solar PV array operation from different physical parameters (series, shunt resistance, ideality factor, etc.) and working condition ( varying temperature, irradiation and especially partial shadow effect) aspects.",
"title": ""
},
{
"docid": "7e1e475f5447894a6c246e7d47586c4b",
"text": "Between 1983 and 2003 forty accidental autoerotic deaths (all males, 13-79 years old) have been investigated at the Institute of Legal Medicine in Hamburg. Three cases with a rather unusual scenery are described in detail: (1) a 28-year-old fireworker was found hanging under a bridge in a peculiar bound belt system. The autopsy and the reconstruction revealed signs of asphyxiation, feminine underwear, and several layers of plastic clothing. (2) A 16-year-old pupil dressed with feminine plastic and rubber utensils fixed and strangulated himself with an electric wire. (3) A 28-year-old handicapped man suffered from progressive muscular dystrophy and was nearly unable to move. His bizarre sexual fantasies were exaggerating: he induced a nurse to draw plastic bags over his body, close his mouth with plastic strips, and put him in a rubbish container where he died from suffocation.",
"title": ""
},
{
"docid": "981a03df711c7c9aabdf163487887824",
"text": "We introduce a new paradigm to investigate unsupervised learning, reducing unsupervised learning to supervised learning. Specifically, we mitigate the subjectivity in unsupervised decision-making by leveraging knowledge acquired from prior, possibly heterogeneous, supervised learning tasks. We demonstrate the versatility of our framework via comprehensive expositions and detailed experiments on several unsupervised problems such as (a) clustering, (b) outlier detection, and (c) similarity prediction under a common umbrella of meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to establish the theoretical foundations of our framework, and show that our framing of metaclustering circumvents Kleinberg’s impossibility theorem for clustering.",
"title": ""
},
{
"docid": "1a98d48ae733670a641c0467d962d9b4",
"text": "Translation Look aside Buffers (TLBs) are critical to system performance, particularly as applications demand larger working sets and with the adoption of virtualization. Architectural support for super pages has previously been proposed to improve TLB performance. By allocating contiguous physical pages to contiguous virtual pages, the operating system (OS) constructs super pages which need just one TLB entry rather than the hundreds required for the constituent base pages. While this greatly reduces TLB misses, these gains are often offset by the implementation difficulties of generating and managing ample contiguity for super pages. We show, however, that basic OS memory allocation mechanisms such as buddy allocators and memory compaction naturally assign contiguous physical pages to contiguous virtual pages. Our real-system experiments show that while usually insufficient for super pages, these intermediate levels of contiguity exist under various system conditions and even under high load. In response, we propose Coalesced Large-Reach TLBs (CoLT), which leverage this intermediate contiguity to coalesce multiple virtual-to-physical page translations into single TLB entries. We show that CoLT implementations eliminate 40\\% to 58\\% of TLB misses on average, improving performance by 14\\%. Overall, we demonstrate that the OS naturally generates page allocation contiguity. CoLT exploits this contiguity to eliminate TLB misses for next-generation, big-data applications with low-overhead implementations.",
"title": ""
},
{
"docid": "9963e1f7126812d9111a4cb6a8eb8dc6",
"text": "The renewed interest in grapheme to phoneme conversion (G2P), due to the need of developing multilingual speech synthesizers and recognizers, suggests new approaches more efficient than the traditional rule&exception ones. A number of studies have been performed to investigate the possible use of machine learning techniques to extract phonetic knowledge in a automatic way starting from a lexicon. In this paper, we present the results of our experiments in this research field. Starting from the state of art, our contribution is in the development of a language-independent learning scheme for G2P based on Classification and Regression Trees (CART). To validate our approach, we realized G2P converters for the following languages: British English, American English, French and Brazilian Portuguese.",
"title": ""
},
{
"docid": "a76332501ef8140176ed434b20483e3b",
"text": "As the integration level of power electronics equipment increases, the coupling between multi-domain physical effects becomes more and more relevant for design optimization. At the same time, virtual analysis capability acquires a critical importance and is conditioned by the achievement of an adequate compromise between accuracy and computational effort. This paper proposes the compact model development of a 6.5 kV field-stop IGBT module, for use in a circuit simulation environment. The model considers the realistic connection of IGBT and anti-parallel freewheeling diode pairs: the description of semiconductor physics is coupled with self-heating effects, both at device and module level; electro-magnetic phenomena associated with the package and layout are also taken into account. The modeling approach follows a mixed physical and behavioral description, resulting in an ideal compromise for realistic analysis of multi-chip structures. Finally, selected examples, derived from a railway traction application scenario, demonstrate the validity of the proposed solution, both for simulation of short transients and periodic operation, qualifying the model as a support tool of general validity, from system design development to reliability investigations.",
"title": ""
},
{
"docid": "4b8470edc0d643e9baeceae7d15a3c8b",
"text": "The authors have investigated potential applications of artificial neural networks for electrocardiographic QRS detection and beat classification. For the task of QRS detection, the authors used an adaptive multilayer perceptron structure to model the nonlinear background noise so as to enhance the QRS complex. This provided more reliable detection of QRS complexes even in a noisy environment. For electrocardiographic QRS complex pattern classification, an artificial neural network adaptive multilayer perceptron was used as a pattern classifier to distinguish between normal and abnormal beat patterns, as well as to classify 12 different abnormal beat morphologies. Preliminary results using the MIT/BIH (Massachusetts Institute of Technology/Beth Israel Hospital, Cambridge, MA) arrhythmia database are encouraging.",
"title": ""
},
{
"docid": "087f9c2abb99d8576645a2460298c1b5",
"text": "In a community cloud, multiple user groups dynamically share a massive number of data blocks. The authors present a new associative data sharing method that uses virtual disks in the MeePo cloud, a research storage cloud built at Tsinghua University. Innovations in the MeePo cloud design include big data metering, associative data sharing, data block prefetching, privileged access control (PAC), and privacy preservation. These features are improved or extended from competing features implemented in DropBox, CloudViews, and MySpace. The reported results support the effectiveness of the MeePo cloud.",
"title": ""
},
{
"docid": "5d40cae84395cc94d68bd4352383d66b",
"text": "Scalable High Efficiency Video Coding (SHVC) is the extension of the High Efficiency Video Coding (HEVC). This standard is developed to ameliorate the coding efficiency for the spatial and quality scalability. In this paper, we investigate a survey for SHVC extension. We describe also its types and explain the different additional coding tools that further improve the Enhancement Layer (EL) coding efficiency. Furthermore, we assess through experimental results the performance of the SHVC for different coding configurations. The effectiveness of the SHVC was demonstrated, using two layers, by comparing its coding adequacy compared to simulcast configuration and HEVC for enhancement layer using HM16 for several test sequences and coding conditions.",
"title": ""
},
{
"docid": "83b5da6ab8ab9a906717fda7aa66dccb",
"text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. Cross-dataset experiments show the generalization ability of our approach.",
"title": ""
},
{
"docid": "ace30c4ad4a74f1ba526b4868e47b5c5",
"text": "China and India are home to two of the world's largest populations, and both populations are aging rapidly. Our data compare health status, risk factors, and chronic diseases among people age forty-five and older in China and India. By 2030, 65.6 percent of the Chinese and 45.4 percent of the Indian health burden are projected to be borne by older adults, a population with high levels of noncommunicable diseases. Smoking (26 percent in both China and India) and inadequate physical activity (10 percent and 17.7 percent, respectively) are highly prevalent. Health policy and interventions informed by appropriate data will be needed to avert this burden.",
"title": ""
},
{
"docid": "83533345743229694e055c27240b295c",
"text": "OBJECTIVES\nTo assess existing research on the effects of various interventions on levels of bicycling. Interventions include infrastructure (e.g., bike lanes and parking), integration with public transport, education and marketing programs, bicycle access programs, and legal issues.\n\n\nMETHODS\nA comprehensive search of peer-reviewed and non-reviewed research identified 139 studies. Study methodologies varied considerably in type and quality, with few meeting rigorous standards. Secondary data were gathered for 14 case study cities that adopted multiple interventions.\n\n\nRESULTS\nMany studies show positive associations between specific interventions and levels of bicycling. The 14 case studies show that almost all cities adopting comprehensive packages of interventions experienced large increases in the number of bicycle trips and share of people bicycling.\n\n\nCONCLUSIONS\nMost of the evidence examined in this review supports the crucial role of public policy in encouraging bicycling. Substantial increases in bicycling require an integrated package of many different, complementary interventions, including infrastructure provision and pro-bicycle programs, supportive land use planning, and restrictions on car use.",
"title": ""
},
{
"docid": "4f51f0e0f4aa51251575695572964f59",
"text": "The problem of extracting features from given input data is of critical importance for the successful application of machine learning. Feature extraction, as usually understood, seeks an optimal transformation from input data into a (typically real-valued) feature vector that can be used as an input for a learning algorithm. Over time, this problem has been attacked using a growing number of diverse techniques that originated in separate research communities, including feature selection, dimensionality reduction, manifold learning, distance metric learning and representation learning. The goal of this paper is to contrast and compare feature extraction techniques coming from different machine learning areas, discuss the modern challenges and open problems in feature extraction and suggest novel solutions to some of them.",
"title": ""
},
{
"docid": "5a6f964ceb0e06ec429456bca9b7764d",
"text": "The increasing availability of rich and complex data in a variety of scientific domains poses a pressing need for tools to enable scientists to rapidly make sense of and gather insights from data. One proposed solution is to design visual query systems (VQSs) that allow scientists to search for desired patterns in their datasets. While many existing VQSs promise to accelerate exploratory data analysis by facilitating this search, they are unfortunately not widely used in practice. Through a year-long collaboration with scientists in three distinct domains—astronomy, genetics, and material science— we study the impact of various features within VQSs that can aid rapid visual data analysis, and how VQSs fit into a scientists’ analysis workflow. Our findings offer design guidelines for improving the usability and adoption of next-generation VQSs, paving the way for VQSs to be applied to a variety of scientific domains.",
"title": ""
},
{
"docid": "30c67c52cb258f86998263b378e0c66d",
"text": "This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\\times 728$ . The executable code and our collected data set are publicly available.",
"title": ""
},
{
"docid": "c22b598200cf68ab26c0c92cbb182b4a",
"text": "With the rise of Web-based applications, it is both important and feasible for human-computer interaction practitioners to measure a product’s user experience. While quantifying user attitudes at a small scale has been heavily studied, in this industry case study, we detail best Happiness Tracking Surveys (HaTS) for collecting attitudinal data at a large scale directly in the product and over time. This method was developed at Google to track attitudes and open-ended feedback over time, and to characterize products’ user bases. This case study of HaTS goes beyond the design of the questionnaire to also suggest best practices for appropriate sampling, invitation techniques, and its data analysis. HaTS has been deployed successfully across dozens of Google’s products to measure progress towards product goals and to inform product decisions; its sensitivity to product changes has been demonstrated widely. We are confident that teams in other organizations will be able to embrace HaTS as well, and, if necessary, adapt it for their unique needs.",
"title": ""
},
{
"docid": "d010a2f8240ff9f6704cde917cb85cf0",
"text": "OBJECTIVE\nAlthough psychological modulation of immune function is now a well-established phenomenon, much of the relevant literature has been published within the last decade. This article speculates on future directions for psychoneuroimmunology research, after reviewing the history of the field.\n\n\nMETHODS\nThis review focuses on human psychoneuroimmunology studies published since 1939, particularly those that have appeared in Psychosomatic Medicine. Studies were clustered according to key themes, including stressor duration and characteristics (laboratory stressors, time-limited naturalistic stressors, or chronic stress), as well as the influences of psychopathology, personality, and interpersonal relationships; the responsiveness of the immune system to behavioral interventions is also addressed. Additionally, we describe trends in populations studied and the changing nature of immunological assessments. The final section focuses on health outcomes and future directions for the field.\n\n\nRESULTS\nThere are now sufficient data to conclude that immune modulation by psychosocial stressors or interventions can lead to actual health changes, with the strongest direct evidence to date in infectious disease and wound healing. Furthermore, recent medical literature has highlighted a spectrum of diseases whose onset and course may be influenced by proinflammatory cytokines, from cardiovascular disease to frailty and functional decline; proinflammatory cytokine production can be directly stimulated by negative emotions and stressful experiences and indirectly stimulated by chronic or recurring infections. Accordingly, distress-related immune dysregulation may be one core mechanism behind a diverse set of health risks associated with negative emotions.\n\n\nCONCLUSIONS\nWe suggest that psychoneuroimmunology may have broad implications for the basic biological sciences and medicine.",
"title": ""
},
{
"docid": "22f3fbb237d6c31bb01e5bb576c7ce66",
"text": "Author Information Baltimore, MD Our propensity to believe that an outline of the human brain is inscribed in one of the most famous paintings of the godhead is both a reflection of the revolution initiated by the Renaissance and a function of how the organ of our humanity psychophysically interprets images. In 1306, Giotto completed a cycle of frescoes in the Arena Chapel in Padua and signed them, thus forsaking the anonymity of the old art of the Middle Ages and Byzantium and initiating the Italian Renaissance, a period in which the personality of the Western artist became a conspicuous component of his artwork. The figures in Giotto's paintings bore strong resemblances to living models, and the emotive content of their faces was true to life. The rediscovery of Greek and Roman artifacts, with their idealized conception of the human form, and the importation into Europe via Arab scholars of the humanistic works of Greek philosophers prepared the Italian soil for the straightforward assertion of individual identity and the inexorable evolution of the cultural craftsman from artisan to artist. Whereas the primitive simplifications and flatness of Byzantium remained a strong influence on artists in Spain and in the Germanic north, the innovations of Giotto and his successors radically changed the depiction of the human form throughout the south; as a consequence of the discovery of single point perspective before 1413 by the architect Filippo Brunellschi (4), this revolution reached its apogee towards the end of the fifteenth century in the city-states of Florence, Venice, and Rome. Almost two centuries after the completion of the Arena Chapel, artists regularly received commissions for both secular and religious subjects; in each realm they attempted to meld the religious prescriptions of the Catholic",
"title": ""
},
{
"docid": "b75a73a61c3934c76d754069c0834b99",
"text": "Software re-engineering can dramatically improve an organization’s ability to maintain and upgrade its legacy production systems. But the risks that accompany traditional re-engineering tend to offset the potential benefits. Incremental software re-engineering is the practice of re-engineering a system’s software components on a phased basis, and then re-incorporating those components into production also on a phased basis. Incremental software re-engineering allows for safer re-engineering, increased flexibility and more immediate return on investment. But commercial automation to support incremental software re-engineering is currently weak. In addition, project managers need a methodology to plan and implement software re-engineering projects based on the incremental approach. This paper covers the advantages of incremental software re-engineering and what is available concerning support technology. The paper describes a process methodology for planning and implementing incremental software re-engineering projects. Finally, gaps in the support technology are identified with suggestions for future tools from vendors. 1998 John Wiley & Sons, Ltd.",
"title": ""
}
] |
scidocsrr
|
8260303b7e590d8e86f700a350832b4e
|
ChEMBL: a large-scale bioactivity database for drug discovery
|
[
{
"docid": "e9326cb2e3b79a71d9e99105f0259c5a",
"text": "Although drugs are intended to be selective, at least some bind to several physiological targets, explaining side effects and efficacy. Because many drug–target combinations exist, it would be useful to explore possible interactions computationally. Here we compared 3,665 US Food and Drug Administration (FDA)-approved and investigational drugs against hundreds of targets, defining each target by its ligands. Chemical similarities between drugs and ligand sets predicted thousands of unanticipated associations. Thirty were tested experimentally, including the antagonism of the β1 receptor by the transporter inhibitor Prozac, the inhibition of the 5-hydroxytryptamine (5-HT) transporter by the ion channel drug Vadilex, and antagonism of the histamine H4 receptor by the enzyme inhibitor Rescriptor. Overall, 23 new drug–target associations were confirmed, five of which were potent (<100 nM). The physiological relevance of one, the drug N,N-dimethyltryptamine (DMT) on serotonergic receptors, was confirmed in a knockout mouse. The chemical similarity approach is systematic and comprehensive, and may suggest side-effects and new indications for many drugs.",
"title": ""
}
] |
[
{
"docid": "c6baff0d600c76fac0be9a71b4238990",
"text": "Nature has provided rich models for computational problem solving, including optimizations based on the swarm intelligence exhibited by fireflies, bats, and ants. These models can stimulate computer scientists to think nontraditionally in creating tools to address application design challenges.",
"title": ""
},
{
"docid": "d552b6beeea587bc014a4c31cabee121",
"text": "Recent successes of neural networks in solving combinatorial problems and games like Go, Poker and others inspire further attempts to use deep learning approaches in discrete domains. In the field of automated planning, the most popular approach is informed forward search driven by a heuristic function which estimates the quality of encountered states. Designing a powerful and easily-computable heuristics however is still a challenging problem on many domains. In this paper, we use machine learning to construct such heuristic automatically. We train a neural network to predict a minimal number of moves required to solve a given instance of Rubik’s cube. We then use the trained network as a heuristic distance estimator with a standard forward-search algorithm and compare the results with other heuristics. Our experiments show that the learning approach is competitive with state-of-the-art and might be the best choice in some use-case scenarios.",
"title": ""
},
{
"docid": "87ded3ada9aa454d8f9a914ef92ccc4a",
"text": "We advocate the use of a new distribution family—the transelliptical—for robust inference of high dimensional graphical models. The transelliptical family is an extension of the nonparanormal family proposed by Liu et al. (2009). Just as the nonparanormal extends the normal by transforming the variables using univariate functions, the transelliptical extends the elliptical family in the same way. We propose a nonparametric rank-based regularization estimator which achieves the parametric rates of convergence for both graph recovery and parameter estimation. Such a result suggests that the extra robustness and flexibility obtained by the semiparametric transelliptical modeling incurs almost no efficiency loss. We also discuss the relationship between this work with the transelliptical component analysis proposed by Han and Liu (2012).",
"title": ""
},
{
"docid": "4cdef79370abcd380357c8be92253fa5",
"text": "In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures. Experiments using data from the Prague Dependency Treebank show that the combined system can handle nonprojective constructions with a precision sufficient to yield a significant improvement in overall parsing accuracy. This leads to the best reported performance for robust non-projective parsing of Czech.",
"title": ""
},
{
"docid": "bae1f44165387e086868efecf318ecd2",
"text": "Clustering graphs under the Stochastic Block Model (SBM) and extensions are well studied. Guarantees of correctness exist under the assumption that the data is sampled from a model. In this paper, we propose a framework, in which we obtain “correctness” guarantees without assuming the data comes from a model. The guarantees we obtain depend instead on the statistics of the data that can be checked. We also show that this framework ties in with the existing model-based framework, and that we can exploit results in model-based recovery, as well as strengthen the results existing in that area of research.",
"title": ""
},
{
"docid": "ffa1dcdc856d2400defba36ed155bfdc",
"text": "The theory of possibility described in this paper is related to the theory of fuzzy sets by defining the concept of a possibility distribution as a fuzzy restriction which acts as an elastic constraint on the values that may be assigned to a variable. More specifically, if F is a fuzzy subset of a universe of discourse U = {u} which is characterized by its membership function/~r, then a proposition of the form \"X is F,\" where X is a variable taking values in U, induces a possibility distribution Hx which equates the possibility of X taking the value u to/~r.(u)--the compatibility of u with F. In this way, X becomes a fuzzy variable which is associated with the possibility distribution Fix in much the same way as a random variable is associated with a probability distribution. In general, a variable may be associated both with a possibility distribution and a probability distribution, with the weak connection between the two expressed as the possibility/probability consistency principle. A thesis advanced in this paper is that the imprecision that is intrinsic in natural languages is, in the main, possibilistic rather than probabilistic in nature. Thus, by employing the concept of a possibility distribution, a proposition, p, in a natural language may be translated into a procedure which computes the probability distribution of a set of attributes which are implied by p. Several types of conditional translation rules are discussed and, in particular, a translation rule r,~r propositions of the form\"X is F is ~-possible, \"~ where ~ is a number in the interval [0, ! ], is formulate~ and illustrated by examples.",
"title": ""
},
{
"docid": "c2b1dd2d2dd1835ed77cf6d43044eed8",
"text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.",
"title": ""
},
{
"docid": "345f54e3a6d00ecb734de529ed559933",
"text": "Size and cost of a switched mode power supply can be reduced by increasing the switching frequency. The maximum switching frequency and the maximum input voltage range, respectively, is limited by the minimum propagated on-time pulse, which is mainly determined by the level shifter speed. At switching frequencies above 10 MHz, a voltage conversion with an input voltage range up to 50 V and output voltages below 5 V requires an on-time of a pulse width modulated signal of less than 5 ns. This cannot be achieved with conventional level shifters. This paper presents a level shifter circuit, which controls an NMOS power FET on a high-voltage domain up to 50 V. The level shifter was implemented as part of a DCDC converter in a 180 nm BiCMOS technology. Experimental results confirm a propagation delay of 5 ns and on-time pulses of less than 3 ns. An overlapping clamping structure with low parasitic capacitances in combination with a high-speed comparator makes the level shifter also very robust against large coupling currents during high-side transitions as fast as 20 V/ns, verified by measurements. Due to the high dv/dt, capacitive coupling currents can be two orders of magnitude larger than the actual signal current. Depending on the conversion ratio, the presented level shifter enables an increase of the switching frequency for multi-MHz converters towards 100 MHz. It supports high input voltages up to 50 V and it can be applied also to other high-speed applications.",
"title": ""
},
{
"docid": "ae04395194c7079aecd95e3b1efb7b50",
"text": "Various methods for the estimation of populations of algae and other small freshwater organisms are described. A method of counting is described in detail. It is basically that of Utermöhl and uses an inverted microscope. If the organisms are randomly distributed, a single count is sufficient to obtain an estimate of their abundance and confidence limits for this estimate, even if pipetting, dilution or concentration are involved. The errors in the actual counting and in converting colony counts to cell numbers are considered and found to be small relative to the random sampling error. Data are also given for a variant of Utermöhl's method using a normal microscope and for a method of using a haemocytometer for the larger plankton algae.",
"title": ""
},
{
"docid": "4ce0ba9266d5a73fb3a120a19510857c",
"text": "This paper presents a novel linear time-varying model predictive controller (LTV-MPC) using a sparse clothoid-based path description: a LTV-MPCC. Clothoids are used world-wide in road design since they allow smooth driving associated with low jerk values. The formulation of the MPC controller is based on the fact that the path of a vehicle traveling at low speeds defines a segment of clothoids if the steering angle is chosen to vary piecewise linearly. Therefore, we can compute the vehicle motion as clothoid parameters and translate them to vehicle inputs. We present simulation results that demonstrate the ability of the controller to produce a very comfortable and smooth driving while maintaining a tracking accuracy comparable to that of a regular LTV-MPC. While the regular MPC controllers use path descriptions where waypoints are close to each other, our LTV-MPCC has the ability of using paths described by very sparse waypoints. In this case, each pair of waypoints describes a clothoid segment and the cost function minimization is performed in a more efficient way allowing larger prediction distances to be used. This paper also presents a novel algorithm that addresses the problem of path sparsification using a reduced number of clothoid segments. The path sparsification enables a path description using few waypoints with almost no loss of detail. The detail of the reconstruction is an adjustable parameter of the algorithm. The higher the required detail, the more clothoid segments are used.",
"title": ""
},
{
"docid": "6c07a47e1b691f492a7efa6c64d13e06",
"text": "Four studies investigate the relationship between individuals' mood and their reliance on the ease retrieval heuristic. Happy participants were consistently more likely to rely on the ease of retrieval heuristic, whereas sad participants were more likely to rely on the activated content. Additional analyses indicate that this pattern is not due to a differential recall (Experiment 2) and that happy participants ceased to rely on the ease of retrieval when the diagnosticity of this information was called into question (Experiment 3). Experiment 4 shows that reliance on the ease of retrieval heuristic resulted in faster judgments than reliance on content, with the former but not the latter being a function of the amount of activated information.",
"title": ""
},
{
"docid": "c749e0a0ae26f95bd8baedfa6e8c5f05",
"text": "This paper proposes a new polynomial time constant factor approximation algorithm for a more-a-decade-long open NP-hard problem, the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in unit disk graph UDG with any positive integer <inline-formula> <tex-math notation=\"LaTeX\">$m \\geq 1$ </tex-math></inline-formula> for the first time in the literature. We observe that it is difficult to modify the existing constant factor approximation algorithm for the minimum three-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem to solve the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG due to the structural limitation of Tutte decomposition, which is the main graph theory tool used by Wang <i>et al.</i> to design their algorithm. To resolve this issue, we first reinvent a new constant factor approximation algorithm for the minimum three-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG and later use this algorithm to design a new constant factor approximation algorithm for the minimum four-connected <inline-formula> <tex-math notation=\"LaTeX\">$m$ </tex-math></inline-formula>-dominating set problem in UDG.",
"title": ""
},
{
"docid": "2f7944399a1f588d1b11d3cf7846af1c",
"text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.",
"title": ""
},
{
"docid": "2fcd7e151c658e29cacda5c4f5542142",
"text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.",
"title": ""
},
{
"docid": "d12d51010fcf4433c5a74a6fbead5cb5",
"text": "This paper introduces the power-density and temperature induced issues in the modern on-chip systems. In particular, the emerging Dark Silicon problem is discussed along with critical research challenges. Afterwards, an overview of key research efforts and concepts is presented that leverage dark silicon for performance and reliability optimization. In case temperature constraints are violated, an efficient dynamic thermal management technique is employed.",
"title": ""
},
{
"docid": "3a37bf4ffad533746d2335f2c442a6d6",
"text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: [email protected] Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "6176a2fd4e07d0c72a53c6207af305ca",
"text": "At present, Bluetooth Low Energy (BLE) is dominantly used in commercially available Internet of Things (IoT) devices -- such as smart watches, fitness trackers, and smart appliances. Compared to classic Bluetooth, BLE has been simplified in many ways that include its connection establishment, data exchange, and encryption processes. Unfortunately, this simplification comes at a cost. For example, only a star topology is supported in BLE environments and a peripheral (an IoT device) can communicate with only one gateway (e.g. a smartphone, or a BLE hub) at a set time. When a peripheral goes out of range, it loses connectivity to a gateway, and cannot connect and seamlessly communicate with another gateway without user interventions. In other words, BLE connections do not get automatically migrated or handed-off to another gateway. In this paper, we propose a system which brings seamless connectivity to BLE-capable mobile IoT devices in an environment that consists of a network of gateways. Our framework ensures that unmodified, commercial off-the-shelf BLE devices seamlessly and securely connect to a nearby gateway without any user intervention.",
"title": ""
},
{
"docid": "79934e1cb9a6c07fb965da9674daeb69",
"text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.",
"title": ""
},
{
"docid": "9951ef687bdf5f01f8d4a38b1120c459",
"text": "Urban ecosystems evolve over time and space as the outcome of dynamic interactions between socio-economic and biophysical processes operating over multiple scales. The ecological resilience of urban ecosystems—the degree to which they tolerate alteration before reorganizing around a new set of structures and processes—is influenced by these interactions. In cities and urbanizing areas fragmentation of natural habitats, simplification and homogenization of species composition, disruption of hydrological systems, and alteration of energy flow and nutrient cycling reduce cross-scale resilience, leaving systems increasingly vulnerable to shifts in system control and structure. Because varied urban development patterns affect the amount and interspersion of built and natural land cover, as well as the human demands on ecosystems differently, we argue that alternative urban patterns (i.e., urban form, land use distribution, and connectivity) generate varied effects on ecosystem dynamics and their ecological resilience. We build on urban economics, landscape ecology, population dynamics, and complex system science to propose a conceptual model and a set of hypotheses that explicitly link urban pattern to human and ecosystem functions in urban ecosystems. Drawing on preliminary results from an empirical study of the relationships between urban pattern and bird and aquatic macroinvertebrate diversity in the Puget Sound region, we propose that resilience in urban ecosystems is a function of the patterns of human activities and natural habitats that control and are controlled by both socio-economic and biophysical processes operating at various scales. We discuss the implications of this conceptual model for urban planning and design.",
"title": ""
}
] |
scidocsrr
|
72421606941910464582042677d9730c
|
Role of Dopamine Receptors in ADHD: A Systematic Meta-analysis
|
[
{
"docid": "39a25e2a4b3e4d56345d0e268d4a1cb1",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder is a heterogeneous disorder of unknown etiology. Little is known about the comorbidity of this disorder with disorders other than conduct. Therefore, the authors made a systematic search of the psychiatric and psychological literature for empirical studies dealing with the comorbidity of attention deficit hyperactivity disorder with other disorders.\n\n\nDATA COLLECTION\nThe search terms included hyperactivity, hyperkinesis, attention deficit disorder, and attention deficit hyperactivity disorder, cross-referenced with antisocial disorder (aggression, conduct disorder, antisocial disorder), depression (depression, mania, depressive disorder, bipolar), anxiety (anxiety disorder, anxiety), learning problems (learning, learning disability, academic achievement), substance abuse (alcoholism, drug abuse), mental retardation, and Tourette's disorder.\n\n\nFINDINGS\nThe literature supports considerable comorbidity of attention deficit hyperactivity disorder with conduct disorder, oppositional defiant disorder, mood disorders, anxiety disorders, learning disabilities, and other disorders, such as mental retardation, Tourette's syndrome, and borderline personality disorder.\n\n\nCONCLUSIONS\nSubgroups of children with attention deficit hyperactivity disorder might be delineated on the basis of the disorder's comorbidity with other disorders. These subgroups may have differing risk factors, clinical courses, and pharmacological responses. Thus, their proper identification may lead to refinements in preventive and treatment strategies. Investigation of these issues should help to clarify the etiology, course, and outcome of attention deficit hyperactivity disorder.",
"title": ""
}
] |
[
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "7dc0be689a4c58f4bc6ee0624605df81",
"text": "Oil spills represent a major threat to ocean ecosystems and their health. Illicit pollution requires continuous monitoring and satellite remote sensing technology represents an attractive option for operational oil spill detection. Previous studies have shown that active microwave satellite sensors, particularly Synthetic Aperture Radar (SAR) can be effectively used for the detection and classification of oil spills. Oil spills appear as dark spots in SAR images. However, similar dark spots may arise from a range of unrelated meteorological and oceanographic phenomena, resulting in misidentification. A major focus of research in this area is the development of algorithms to distinguish oil spills from `look-alikes'. This paper describes the development of a new approach to SAR oil spill detection employing two different Artificial Neural Networks (ANN), used in sequence. The first ANN segments a SAR image to identify pixels belonging to candidate oil spill features. A set of statistical feature parameters are then extracted and used to drive a second ANN which classifies objects into oil spills or look-alikes. The proposed algorithm was trained using 97 ERS-2 SAR and ENVSAT ASAR images of individual verified oil spills or/and look-alikes. The algorithm was validated using a large dataset comprising full-swath images and correctly identified 91.6% of reported oil spills and 98.3% of look-alike phenomena. The segmentation stage of the new technique outperformed the established edge detection and adaptive thresholding approaches. An analysis of feature descriptors highlighted the importance of image gradient information in the classification stage.",
"title": ""
},
{
"docid": "0ae8f9626a6621949c9d6c5fa7c2a098",
"text": "In this paper, numerical observability analysis is restudied. Algorithms to determine observable islands and to decide a minimal set of pseudo-measurements to make the unobservable system observable are presented. The algorithms make direct use of the measurement Jacobian matrix. Gaussian elimination, which makes the whole process of observability analysis simple and effective, is the only computation required by the algorithms. Numerical examples are used to illustrate the proposed algorithms. Comparison of computation expense on the Texas system among the proposed algorithm and the existing algorithms is performed.",
"title": ""
},
{
"docid": "b1a440cb894c1a76373bdbf7ff84318d",
"text": "We present a language-theoretic approach to symbolic model checking of PCTL over discrete-time Markov chains. The probability with which a path formula is satisfied is represented by a regular expression. A recursive evaluation of the regular expression yields an exact rational value when transition probabilities are rational, and rational functions when some probabilities are left unspecified as parameters of the system. This allows for parametric model checking by evaluating the regular expression for different parameter values, for instance, to study the influence of a lossy channel in the overall reliability of a randomized protocol.",
"title": ""
},
{
"docid": "fd0defe3aaabd2e27c7f9d3af47dd635",
"text": "A fast test for triangle-triangle intersection by computing signed vertex-plane distances (sufficient if one triangle is wholly to one side of the other) and signed line-line distances of selected edges (otherwise) is presented. This algorithm is faster than previously published algorithms and the code is available online.",
"title": ""
},
{
"docid": "c21e39d4cf8d3346671ae518357c8edb",
"text": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is promising approach to constructing deep learning applications in the future.",
"title": ""
},
{
"docid": "8e4eb520c80dfa8d39c69b1273ea89c8",
"text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.",
"title": ""
},
{
"docid": "a55eed627afaf39ee308cc9e0e10a698",
"text": "Perspective-taking is a complex cognitive process involved in social cognition. This positron emission tomography (PET) study investigated by means of a factorial design the interaction between the emotional and the perspective factors. Participants were asked to adopt either their own (first person) perspective or the (third person) perspective of their mothers in response to situations involving social emotions or to neutral situations. The main effect of third-person versus first-person perspective resulted in hemodynamic increase in the medial part of the superior frontal gyrus, the left superior temporal sulcus, the left temporal pole, the posterior cingulate gyrus, and the right inferior parietal lobe. A cluster in the postcentral gyrus was detected in the reverse comparison. The amygdala was selectively activated when subjects were processing social emotions, both related to self and other. Interaction effects were identified in the left temporal pole and in the right postcentral gyrus. These results support our prediction that the frontopolar, the somatosensory cortex, and the right inferior parietal lobe are crucial in the process of self/ other distinction. In addition, this study provides important building blocks in our understanding of social emotion processing and human empathy.",
"title": ""
},
{
"docid": "afde8e4d9ed4b2a95d780522e7905047",
"text": "Compared to offline shopping, the online shopping experience may be viewed as lacking human warmth and sociability as it is more impersonal, anonymous, automated and generally devoid of face-to-face interactions. Thus, understanding how to create customer loyalty in online environments (e-Loyalty) is a complex process. In this paper a model for e-Loyalty is proposed and used to examine how varied conditions of social presence in a B2C e-Services context influence e-Loyalty and its antecedents of perceived usefulness, trust and enjoyment. This model is examined through an empirical study involving 185 subjects using structural equation modeling techniques. Further analysis is conducted to reveal gender differences concerning hedonic elements in the model on e-Loyalty. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cfc4dc24378c5b7b83586db56fad2cac",
"text": "This study investigated the effects of proximal and distal constructs on adolescent's academic achievement through self-efficacy. Participants included 482 ninth- and tenth- grade Norwegian students who completed a questionnaire designed to assess school-goal orientations, organizational citizenship behavior, academic self-efficacy, and academic achievement. The results of a bootstrapping technique used to analyze relationships between the constructs indicated that school-goal orientations and organizational citizenship predicted academic self-efficacy. Furthermore, school-goal orientation, organizational citizenship, and academic self-efficacy explained 46% of the variance in academic achievement. Mediation analyses revealed that academic self-efficacy mediated the effects of perceived task goal structure, perceived ability structure, civic virtue, and sportsmanship on adolescents' academic achievements. The results are discussed in reference to current scholarship, including theories underlying our hypothesis. Practical implications and directions for future research are suggested.",
"title": ""
},
{
"docid": "33d98005d696cc5cee6a23f5c1e7c538",
"text": "Design activity has recently attempted to embrace designing the user experience. Designers need to demystify how we design for user experience and how the products we design achieve specific user experience goals. This paper proposes an initial framework for understanding experience as it relates to user-product interactions. We propose a system for talking about experience, and look at what influences experience and qualities of experience. The framework is presented as a tool to understand what kinds of experiences products evoke.",
"title": ""
},
{
"docid": "e98e902e22d9b8acb6e9e9dcd241471c",
"text": "We introduce a novel iterative approach for event coreference resolution that gradually builds event clusters by exploiting inter-dependencies among event mentions within the same chain as well as across event chains. Among event mentions in the same chain, we distinguish withinand cross-document event coreference links by using two distinct pairwise classifiers, trained separately to capture differences in feature distributions of withinand crossdocument event clusters. Our event coreference approach alternates between WD and CD clustering and combines arguments from both event clusters after every merge, continuing till no more merge can be made. And then it performs further merging between event chains that are both closely related to a set of other chains of events. Experiments on the ECB+ corpus show that our model outperforms state-of-the-art methods in joint task of WD and CD event coreference resolution.",
"title": ""
},
{
"docid": "fbcdb57ae0d42e9665bc95dbbca0d57b",
"text": "Data classification and tag recommendation are both important and challenging tasks in social media. These two tasks are often considered independently and most efforts have been made to tackle them separately. However, labels in data classification and tags in tag recommendation are inherently related. For example, a Youtube video annotated with NCAA, stadium, pac12 is likely to be labeled as football, while a video/image with the class label of coast is likely to be tagged with beach, sea, water and sand. The existence of relations between labels and tags motivates us to jointly perform classification and tag recommendation for social media data in this paper. In particular, we provide a principled way to capture the relations between labels and tags, and propose a novel framework CLARE, which fuses data CLAssification and tag REcommendation into a coherent model. With experiments on three social media datasets, we demonstrate that the proposed framework CLARE achieves superior performance on both tasks compared to the state-of-the-art methods.",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
},
{
"docid": "0d7c29b40f92b5997791f1bbe192269c",
"text": "We present a general approach to video understanding, inspired by semantic transfer techniques that have been successfully used for 2D image analysis. Our method considers a video to be a 1D sequence of clips, each one associated with its own semantics. The nature of these semantics – natural language captions or other labels – depends on the task at hand. A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, following which, reference semantics can be transferred to the test video. We describe two matching methods, both designed to ensure that (a) reference clips appear similar to test clips and (b), taken together, the semantics of the selected reference clips is consistent and maintains temporal coherence. We use our method for video captioning on the LSMDC’16 benchmark, video summarization on the SumMe and TV-Sum benchmarks, Temporal Action Detection on the Thumos2014 benchmark, and sound prediction on the Greatest Hits benchmark. Our method not only surpasses the state of the art, in four out of five benchmarks, but importantly, it is the only single method we know of that was successfully applied to such a diverse range of tasks.",
"title": ""
},
{
"docid": "28bb93f193b62cc1829cf082ffcea7f9",
"text": "Analyzing a user's first impression of a Web site is essential for interface designers, as it is tightly related to their overall opinion of a site. In fact, this early evaluation affects user navigation behavior. Perceived usability and user interest (e.g., revisiting and recommending the site) are parameters influenced by first opinions. Thus, predicting the latter when creating a Web site is vital to ensure users’ acceptance. In this regard, Web aesthetics is one of the most influential factors in this early perception. We propose the use of low-level image parameters for modeling Web aesthetics in an objective manner, which is an innovative research field. Our model, obtained by applying a stepwise multiple regression algorithm, infers a user's first impression by analyzing three different visual characteristics of Web site screenshots—texture, luminance, and color—which are directly derived from MPEG-7 descriptors. The results obtained over three wide Web site datasets (composed by 415, 42, and 6 Web sites, respectively) reveal a high correlation between low-level parameters and the users’ evaluation, thus allowing a more precise and objective prediction of users’ opinion than previous models that are based on other image characteristics with fewer predictors. Therefore, our model is meant to support a rapid assessment of Web sites in early stages of the design process to maximize the likelihood of the users’ final approval.",
"title": ""
},
{
"docid": "25c41bdba8c710b663cb9ad634b7ae5d",
"text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “ l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 28th VLDB Conference, Hong Kong, China, 2002",
"title": ""
},
{
"docid": "288dc197e9be9b5289615b10eddbb987",
"text": "As biometric applications are fielded to serve large population groups, issues of performance differences between individual sub-groups are becoming increasingly important. In this paper we examine cases where we believe race is one such factor. We look in particular at two forms of problem; facial classification and image synthesis. We take the novel approach of considering race as a boundary for transfer learning in both the task (facial classification) and the domain (synthesis over distinct datasets). We demonstrate a series of techniques to improve transfer learning of facial classification; outperforming similar models trained in the target's own domain. We conduct a study to evaluate the performance drop of Generative Adversarial Networks trained to conduct image synthesis, in this process, we produce a new annotation for the Celeb-A dataset by race. These networks are trained solely on one race and tested on another - demonstrating the subsets of the CelebA to be distinct domains for this task.",
"title": ""
},
{
"docid": "8e74a27a3edea7cf0e88317851bc15eb",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "0daa16a3f40612946187d6c66ccd96f4",
"text": "A 60 GHz frequency band planar diplexer based on Substrate Integrated Waveguide (SIW) technology is presented in this research. The 5th order millimeter wave SIW filter is investigated first, and then the 60 GHz SIW diplexer is designed and been simulated. SIW-microstrip transitions are also included in the final design. The relative bandwidths of up and down channels are 1.67% and 1.6% at 59.8 GHz and 62.2 GHz respectively. Simulation shows good channel isolation, small return losses and moderate insertion losses in pass bands. The diplexer can be easily integrated in millimeter wave integrated circuits.",
"title": ""
}
] |
scidocsrr
|
e5c2105d8766b5c9ed70a2963beb54fa
|
Power Approaches for Implantable Medical Devices
|
[
{
"docid": "108058f1814d7520003b44f1ffc99cb5",
"text": "The process of acquiring the energy surrounding a system and converting it into usable electrical energy is termed power harvesting. In the last few years, there has been a surge of research in the area of power harvesting. This increase in research has been brought on by the modern advances in wireless technology and low-power electronics such as microelectromechanical systems. The advances have allowed numerous doors to open for power harvesting systems in practical real-world applications. The use of piezoelectric materials to capitalize on the ambient vibrations surrounding a system is one method that has seen a dramatic rise in use for power harvesting. Piezoelectric materials have a crystalline structure that provides them with the ability to transform mechanical strain energy into electrical charge and, vice versa, to convert an applied electrical potential into mechanical strain. This property provides these materials with the ability to absorb mechanical energy from their surroundings, usually ambient vibration, and transform it into electrical energy that can be used to power other devices. While piezoelectric materials are the major method of harvesting energy, other methods do exist; for example, one of the conventional methods is the use of electromagnetic devices. In this paper we discuss the research that has been performed in the area of power harvesting and the future goals that must be achieved for power harvesting systems to find their way into everyday use.",
"title": ""
}
] |
[
{
"docid": "f4401e483c519e1f2d33ee18ea23b8d7",
"text": "Cultivation of mindfulness, the nonjudgmental awareness of experiences in the present moment, produces beneficial effects on well-being and ameliorates psychiatric and stress-related symptoms. Mindfulness meditation has therefore increasingly been incorporated into psychotherapeutic interventions. Although the number of publications in the field has sharply increased over the last two decades, there is a paucity of theoretical reviews that integrate the existing literature into a comprehensive theoretical framework. In this article, we explore several components through which mindfulness meditation exerts its effects: (a) attention regulation, (b) body awareness, (c) emotion regulation (including reappraisal and exposure, extinction, and reconsolidation), and (d) change in perspective on the self. Recent empirical research, including practitioners' self-reports and experimental data, provides evidence supporting these mechanisms. Functional and structural neuroimaging studies have begun to explore the neuroscientific processes underlying these components. Evidence suggests that mindfulness practice is associated with neuroplastic changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network, and default mode network structures. The authors suggest that the mechanisms described here work synergistically, establishing a process of enhanced self-regulation. Differentiating between these components seems useful to guide future basic research and to specifically target areas of development in the treatment of psychological disorders.",
"title": ""
},
{
"docid": "d5696c9118437b81dc1818ecd8f18741",
"text": "The contribution of this paper is to propose and experimentally validate an optimizing control strategy for power kites flying crosswind. The algorithm ensures the kite follows a reference path (control) and also periodically optimizes the reference path (efficiency optimization). The path-following part of the controller is capable of consistently following a reference path, despite significant time delays and wind variations, using position measurements only. The path-optimization part adjusts the reference path in order to maximize line tension. It uses a real-time optimization algorithm that combines off-line modeling knowledge and on-line measurements. The algorithm has been tested comprehensively on a small-scale prototype, and this paper focuses on experimental results.",
"title": ""
},
{
"docid": "e38aa8466226257ca85e3fe0e709edc9",
"text": "Recently, recurrent neural networks (RNNs) as powerful sequence models have re-emerged as a potential acoustic model for statistical parametric speech synthesis (SPSS). The long short-term memory (LSTM) architecture is particularly attractive because it addresses the vanishing gradient problem in standard RNNs, making them easier to train. Although recent studies have demonstrated that LSTMs can achieve significantly better performance on SPSS than deep feedforward neural networks, little is known about why. Here we attempt to answer two questions: a) why do LSTMs work well as a sequence model for SPSS; b) which component (e.g., input gate, output gate, forget gate) is most important. We present a visual analysis alongside a series of experiments, resulting in a proposal for a simplified architecture. The simplified architecture has significantly fewer parameters than an LSTM, thus reducing generation complexity considerably without degrading quality.",
"title": ""
},
{
"docid": "44665a3d2979031aca85010b9ad1ec90",
"text": "Studies in humans and non-human primates have provided evidence for storage of working memory contents in multiple regions ranging from sensory to parietal and prefrontal cortex. We discuss potential explanations for these distributed representations: (i) features in sensory regions versus prefrontal cortex differ in the level of abstractness and generalizability; and (ii) features in prefrontal cortex reflect representations that are transformed for guidance of upcoming behavioral actions. We propose that the propensity to produce persistent activity is a general feature of cortical networks. Future studies may have to shift focus from asking where working memory can be observed in the brain to how a range of specialized brain areas together transform sensory information into a delayed behavioral response.",
"title": ""
},
{
"docid": "c613138270b05f909904519d195fcecf",
"text": "This study deals with artificial neural network (ANN) modeling a diesel engine using waste cooking biodiesel fuel to predict the brake power, torque, specific fuel consumption and exhaust emissions of engine. To acquire data for training and testing the proposed ANN, two cylinders, four-stroke diesel engine was fuelled with waste vegetable cooking biodiesel and diesel fuel blends and operated at different engine speeds. The properties of biodiesel produced from waste vegetable oil was measured based on ASTM standards. The experimental results reveal that blends of waste vegetable oil methyl ester with diesel fuel provide better engine performance and improved emission characteristics. Using some of the experimental data for training, an ANN model based on standard Back-Propagation algorithm for the engine was developed. Multi layer perception network (MLP) was used for nonlinear mapping between the input and the output parameters. Different activation functions and several rules were used to assess the percentage error between the desired and the predicted values. It was observed that the ANN model can predict the engine performance and exhaust emissions quite well with correlation coefficient (R) were 0.9487, 0.999, 0.929 and 0.999 for the engine torque, SFC, CO and HC emissions, respectively. The prediction MSE (Mean Square Error) error was between the desired outputs as measured values and the simulated values by the model was obtained as 0.0004.",
"title": ""
},
{
"docid": "2e99e535f2605e88571407142e4927ee",
"text": "Stability is a common tool to verify the validity of sample based algorithms. In clustering it is widely used to tune the parameters of the algorithm, such as the number k of clusters. In spite of the popularity of stability in practical applications, there has been very little theoretical analysis of this notion. In this paper we provide a formal definition of stability and analyze some of its basic properties. Quite surprisingly, the conclusion of our analysis is that for large sample size, stability is fully determined by the behavior of the objective function which the clustering algorithm is aiming to minimize. If the objective function has a unique global minimizer, the algorithm is stable, otherwise it is unstable. In particular we conclude that stability is not a well-suited tool to determine the number of clusters it is determined by the symmetries of the data which may be unrelated to clustering parameters. We prove our results for center-based clusterings and for spectral clustering, and support our conclusions by many examples in which the behavior of stability is counter-intuitive.",
"title": ""
},
{
"docid": "cbe58b6e45c8716411f7854b0cf03d49",
"text": "Before a person suffering from a traumatic brain injury reaches a medical facility, measuring their pupillary light reflex (PLR) is one of the few quantitative measures a clinician can use to predict their outcome. We propose PupilScreen, a smartphone app and accompanying 3D-printed box that combines the repeatability, accuracy, and precision of a clinical device with the ubiquity and convenience of the penlight test that clinicians regularly use in emergency situations. The PupilScreen app stimulates the patient's eyes using the smartphone's flash and records the response using the camera. The PupilScreen box, akin to a head-mounted virtual reality display, controls the eyes' exposure to light. The recorded video is processed using convolutional neural networks that track the pupil diameter over time, allowing for the derivation of clinically relevant measures. We tested two different network architectures and found that a fully convolutional neural network was able to track pupil diameter with a median error of 0.30 mm. We also conducted a pilot clinical evaluation with six patients who had suffered a TBI and found that clinicians were almost perfect when separating unhealthy pupillary light reflexes from healthy ones using PupilScreen alone.",
"title": ""
},
{
"docid": "3dcf5f63798458ed697a23664675f2fe",
"text": "Volatility plays crucial roles in financial markets, such as in derivative pricing, portfolio risk management, and hedging strategies. Therefore, accurate prediction of volatility is critical. We propose a new hybrid long short-term memory (LSTM) model to forecast stock price volatility that combines the LSTM model with various generalized autoregressive conditional heteroscedasticity (GARCH)-type models. We use KOSPI 200 index data to discover proposed hybrid models that combine an LSTM with one to three GARCH-type models. In addition, we compare their performance with existing methodologies by analyzing single models, such as the GARCH, exponential GARCH, exponentially weighted moving average, a deep feedforward neural network (DFN), and the LSTM, as well as the hybrid DFN models combining a DFN with one GARCH-type model. Their performance is compared with that of the proposed hybrid LSTM models. We discover that GEW-LSTM, a proposed hybrid model combining the LSTM model with three GARCH-type models, has the lowest prediction errors in terms of mean absolute error (MAE), mean squared error (MSE), heteroscedasticity adjusted MAE (HMAE), and heteroscedasticity adjusted MSE (HMSE). The MAE of GEW-LSTM is 0.0107, which is 37.2% less than that of the E-DFN (0.017), the model combining EGARCH and DFN and the best model among those existing. In addition, the GEWLSTM has 57.3%, 24.7%, and 48% smaller MSE, HMAE, and HMSE, respectively. The first contribution of this study is its hybrid LSTM model that combines excellent sequential pattern learning with improved prediction performance in stock market volatility. Second, our proposed model markedly enhances prediction performance of the existing literature by combining a neural network model with multiple econometric models rather than only a single econometric model. Finally, the proposed methodology can be extended to various fields as an integrated model combining time-series and neural network models as well as forecasting stock market volatility.",
"title": ""
},
{
"docid": "9b85018faaa87dc6bf197ea1eee426e2",
"text": "Currently, a large number of industrial data, usually referred to big data, are collected from Internet of Things (IoT). Big data are typically heterogeneous, i.e., each object in big datasets is multimodal, posing a challenging issue on the convolutional neural network (CNN) that is one of the most representative deep learning models. In this paper, a deep convolutional computation model (DCCM) is proposed to learn hierarchical features of big data by using the tensor representation model to extend the CNN from the vector space to the tensor space. To make full use of the local features and topologies contained in the big data, a tensor convolution operation is defined to prevent overfitting and improve the training efficiency. Furthermore, a high-order backpropagation algorithm is proposed to train the parameters of the deep convolutional computational model in the high-order space. Finally, experiments on three datasets, i.e., CUAVE, SNAE2, and STL-10 are carried out to verify the performance of the DCCM. Experimental results show that the deep convolutional computation model can give higher classification accuracy than the deep computation model or the multimodal model for big data in IoT.",
"title": ""
},
{
"docid": "32598fba1f5e7507113d89ad1978e867",
"text": "Good motion data is costly to create. Such an expense often makes the reuse of motion data through transformation and retargetting a more attractive option than creating new motion from scratch. Reuse requires the ability to search automatically and efficiently a growing corpus of motion data, which remains a difficult open problem. We present a method for quickly searching long, unsegmented motion clips for subregions that most closely match a short query clip. Our search algorithm is based on a weighted PCA-based pose representation that allows for flexible and efficient pose-to-pose distance calculations. We present our pose representation and the details of the search algorithm. We evaluate the performance of a prototype search application using both synthetic and captured motion data. Using these results, we propose ways to improve the application's performance. The results inform a discussion of the algorithm's good scalability characteristics.",
"title": ""
},
{
"docid": "b5c8d34b75dbbfdeb666fd76ef524be7",
"text": "Systematic Literature Reviews (SLR) may not provide insight into the \"state of the practice\" in SE, as they do not typically include the \"grey\" (non-published) literature. A Multivocal Literature Review (MLR) is a form of a SLR which includes grey literature in addition to the published (formal) literature. Only a few MLRs have been published in SE so far. We aim at raising the awareness for MLRs in SE by addressing two research questions (RQs): (1) What types of knowledge are missed when a SLR does not include the multivocal literature in a SE field? and (2) What do we, as a community, gain when we include the multivocal literature and conduct MLRs? To answer these RQs, we sample a few example SLRs and MLRs and identify the missing and the gained knowledge due to excluding or including the grey literature. We find that (1) grey literature can give substantial benefits in certain areas of SE, and that (2) the inclusion of grey literature brings forward certain challenges as evidence in them is often experience and opinion based. Given these conflicting viewpoints, the authors are planning to prepare systematic guidelines for performing MLRs in SE.",
"title": ""
},
{
"docid": "72041ae7e06d3c35701726a6c878c081",
"text": "This paper presents a compression algorithm for color filter array (CFA) images in a wireless capsule endoscopy system. The proposed algorithm consists of a new color space transformation (known as YLMN), a raster-order prediction model, and a single context adaptive Golomb–Rice encoder to encode the residual signal with variable length coding. An optimum reversible color transformation derivation model is presented first, which incorporates a prediction model to find the optimum color transformation. After the color transformation, each color component has been independently encoded with a low complexity raster-order prediction model and Golomb–Rice encoder. The algorithm is implemented using a TSMC 65-nm CMOS process, which shows a reduction in gate count by 38.9% and memory requirement by 71.2% compared with existing methods. Performance assessment using CFA database shows the proposed design can outperform existing lossless and near-lossless compression algorithms by a large margin, which makes it suitable for capsule endoscopy application.",
"title": ""
},
{
"docid": "d199e473a8a22618c9040fd345254b10",
"text": "One of the first questions a researcher or designer of wearable technology has to answer in the design process is where on the body the device should be worn. It has been almost 20 years since Gemperle et al. wrote \"Design for Wearability\" [17], and although much of her initial guidelines on humans factors surrounding wearability still stand, devices and use cases have changed over time. This paper is a collection of literature and updated guidelines and reasons for on-body location depending on the use of the wearable technology and the affordances provided by different locations on the body.",
"title": ""
},
{
"docid": "047676c0e4ebf98a66136897b3e7658d",
"text": "Ever increasing numbers of users are surfing the net via an LTE access network. It has been predicted that three quarters (75%) of the world's mobile data traffic will be video-based by 2020. As the video space grows, media companies have been using adaptive bitrate technology for many years. The MPEG-DASH is the widely used HTTP-based adaptive streaming solution based on TCP. To address the bandwidth utilization issue, in this paper we investigate the ETSI Mobile Edge Computing (MEC) Intelligent Video Acceleration Service mechanism to improve video transmission over 4G LTE. Specifically, with the radio information provided by the MEC server, the video server can adaptively transmit just one copy of suitable bitrate data based on the throughput. We use OpenAirInterface (OAI), a 4G compliance emulator, as our testbed to implement and verify our proposed MEC-based adaptive video transmission mechanism. We design and implement a “filter” module in the Medium Access Control (MAC) layer over the OAI platform, which can reference Channel Quality Indicator (CQI) as a parameter to decide which bitrate type of data should be passed to the MAC Packet Scheduler for real resource allocation. By using constant-bitrate as our control groups, the testbed indicates that, consistent with our expectations, our mechanism can significantly benefit from lower latency with a reasonable throughput.",
"title": ""
},
{
"docid": "7aa5bf782622f2f0247dce09dcb23077",
"text": "In the wake of the digital revolution we will see a dramatic transformation of our economy and societal institutions. While the benefits of this transformation can be massive, there are also tremendous risks. The fundaments of autonomous decision-making, human dignity, and democracy are shaking. After the automation of production processes and vehicle operation, the automation of society is next. This is moving us to a crossroads: we must decide between a society in which the actions are determined in a top-down way and then implemented by coercion or manipulative technologies or a society, in which decisions are taken in a free and participatory way. Modern information and communication systems enable both, but the latter has economic and strategic benefits.",
"title": ""
},
{
"docid": "d8f21e77a60852ea83f4ebf74da3bcd0",
"text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "6dc6d3be0cbfd280efc81adef6182d0d",
"text": "This paper aims to trace the development of management accounting systems (MAS) in a Portuguese bank, where an activity based costing system (ABC) is being trialled for implementation, as a means to improving the economy, efficiency and effectiveness of employee activity. The culture of banking in Portugal has changed significantly over the last 25 years, but at the same time there are older traditions which remain powerful. It will therefore be significant to study how an imported MAS like ABC is developed and disseminated within a Portuguese banking context. The research can be classified as a longitudinal study of organisational change using a single case study. It draws on Morgan and Sturdy’s (2000) critical framework for exploring change through three lenses – changing structures, changing discourses and the effect of both these processes on power and inequality. The study provides new insights into how management accounting practices, along with other organisational systems, play an important role questioning, visualising, analysing, and measuring implemented strategies. These practices have an important influence on strategic decision-making, and help legitimate action. As the language and practice of management have shifted towards strategy and marketing discourses, patterns of work, organisation and career are being restructured.",
"title": ""
},
{
"docid": "781ef0722d8a03024924a556aa1dc61e",
"text": "Digital 3D mosaics generation is a current trend of NPR (Non Photorealistic Rendering) field; in this demo we present an interactive system realized in JAVA where the user can simulate ancient mosaic in a 3D environment starting for any input image. Different simulation engines able to render the so-called \"Opus Musivum\"and \"Opus Vermiculatum\" are employed. Different parameters can be dynamically adjusted to obtain very impressive results.",
"title": ""
}
] |
scidocsrr
|
d13720fe665e0eb6cf95764384e43e02
|
Auxiliary Multimodal LSTM for Audio-visual Speech Recognition and Lipreading
|
[
{
"docid": "80c44d61e019f6858326fb9c5753c700",
"text": "This paper develops an Audio-Visual Speech Recognition (AVSR) method, by (1) exploring high-performance visual features, (2) applying audio and visual deep bottleneck features to improve AVSR performance, and (3) investigating effectiveness of voice activity detection in a visual modality. In our approach, many kinds of visual features are incorporated, subsequently converted into bottleneck features by deep learning technology. By using proposed features, we successfully achieved 73.66% lipreading accuracy in speaker-independent open condition, and about 90% AVSR accuracy on average in noisy environments. In addition, we extracted speech segments from visual features, resulting 77.80% lipreading accuracy. It is found VAD is useful in both audio and visual modalities, for better lipreading and AVSR.",
"title": ""
}
] |
[
{
"docid": "160aef1e152fcf761a813dcabc33f1d4",
"text": "Three different samples (total N = 485) participated in the development and refinement ofthe Leadership Scale for Sports (LSS). A five-factor solution with 40 items describing the most salient dimensions of coaching behavior was selected as the most meaningful. These factors were named Training and Instruction, Democratic Behavior, Autocratic Behavior, Social Support, and Positive Feedback. Internal consistency estimates ranged from .45 to .93 and the test-retest reliability coefficients ranged from .71 to .82. The relative stability of the factor structure across the different samples confirmed the factorial validity ofthe scale. The interpretation ofthe factors established the content validity of the scale. Finally, possible uses of the LSS were pointed out.",
"title": ""
},
{
"docid": "0f45452e8c9ca8aaf501e7e89685746b",
"text": "Chatbots are programs that mimic human conversation using Artificial Intelligence (AI). It is designed to be the ultimate virtual assistant, entertainment purpose, helping one to complete tasks ranging from answering questions, getting driving directions, turning up the thermostat in smart home, to playing one's favorite tunes etc. Chatbot has become more popular in business groups right now as they can reduce customer service cost and handles multiple users at a time. But yet to accomplish many tasks there is need to make chatbots as efficient as possible. To address this problem, in this paper we provide the design of a chatbot, which provides an efficient and accurate answer for any query based on the dataset of FAQs using Artificial Intelligence Markup Language (AIML) and Latent Semantic Analysis (LSA). Template based and general questions like welcome/ greetings and general questions will be responded using AIML and other service based questions uses LSA to provide responses at any time that will serve user satisfaction. This chatbot can be used by any University to answer FAQs to curious students in an interactive fashion.",
"title": ""
},
{
"docid": "c8f8df96501d0786262ebb7630c73594",
"text": "In this paper, a grouping genetic algorithm based approach is proposed for dividing stocks into groups and mining a set of stock portfolios, namely group stock portfolio. Each chromosome consists of three parts. Grouping and stock parts are used to indicate how to divide stocks into groups. Stock portfolio part is used to represent the purchased stocks and their purchased units. The fitness of each chromosome is evaluated by the group balance and the portfolio satisfaction. The group balance is utilized to make the groups represented by the chromosome have as similar number of stocks as possible. The portfolio satisfaction is used to evaluate the goodness of profits and satisfaction of investor's requests of all possible portfolio combinations that can generate from a chromosome. Experiments on a real data were also made to show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "7681a78f2d240afc6b2e48affa0612c1",
"text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.",
"title": ""
},
{
"docid": "9ebaed9fa1ceb1038924055b768f23d2",
"text": "We provide information on the Survivable Network Design Library (SNDlib), a data library for fixed telecommunication network design that can be accessed at http://sndlib.zib.de. In version 1.0, the library contains data related to 22 networks which, combined with a set of selected planning parameters, leads to 830 network planning problem instances. In this paper, we provide a mathematical model for each planning problem considered in the library and describe the data concepts of the SNDlib. Furthermore, we provide statistical information and details about the origin of the data sets.",
"title": ""
},
{
"docid": "4e411bc18c6dfa74ffc6ab28228bdebf",
"text": "If a single deontic reasoning system handles deontic social contracts and deontic precautions, then performance on both should be impaired by neural damage to that system. But if they are handled by two distinct neurocognitive systems, as predicted by SCT and HMT, then neural trauma could cause a dissociation. Buller’s hypothesis fails again: focal brain damage can selectively impair social contract reasoning while leaving precautionary reasoning intact [5]. This dissociation within the domain of deontic rules has recently been replicated using neuroimaging [6]. Interpretations of social contract rules track SCT’s domain-specialized inference procedures: in [8], we refuted Buller-style logic explanations of social contract results for perspective change, switched rules, and ‘wants’ problems – facts he fails to mention, let alone discuss. Buller’s systematic inattention to large bodies of findings that conflict with his assertions is not due to lack of space in TICS – the pretence that these findings do not exist pervades his book, and its treatment of many areas of evolutionary psychology. (For further analysis, see www. psych.ucsb.edu/research/cep/buller.htm)",
"title": ""
},
{
"docid": "66d6f514c6bce09110780a1130b64dfe",
"text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. Clients example was considered according to collected information, one questionnaire was designed for bank organization and another one was prepared for banks` clients in which reliability and validity are approved. The research result indicates that CRM is ineffective on marketing performance.",
"title": ""
},
{
"docid": "91e38df08894f59e134f83ae532b09e7",
"text": "Many functional network properties of the human brain have been identified during rest and task states, yet it remains unclear how the two relate. We identified a whole-brain network architecture present across dozens of task states that was highly similar to the resting-state network architecture. The most frequent functional connectivity strengths across tasks closely matched the strengths observed at rest, suggesting this is an \"intrinsic,\" standard architecture of functional brain organization. Furthermore, a set of small but consistent changes common across tasks suggests the existence of a task-general network architecture distinguishing task states from rest. These results indicate the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest, and secondarily by evoked task-general and task-specific network changes. This establishes a strong relationship between resting-state functional connectivity and task-evoked functional connectivity-areas of neuroscientific inquiry typically considered separately.",
"title": ""
},
{
"docid": "11707c7f7c5b028392b25d1dffa9daeb",
"text": "High reliability and large rangeability are required of pumps in existing and new plants which must be capable of reliable on-off cycling operations and specially low load duties. The reliability and rangeability target is a new task for the pump designer/researcher and is made very challenging by the cavitation and/or suction recirculation effects, first of all the pump damage. The present knowledge about the: a) design critical parameters and their optimization, b) field problems diagnosis and troubleshooting has much advanced, in the very latest years. The objective of the pump manufacturer is to develop design solutions and troubleshooting approaches which improve the impeller life as related to cavitation erosion and enlarge the reliable operating range by minimizing the effects of the suction recirculation. This paper gives a short description of several field cases characterized by different damage patterns and other symptoms related with cavitation and/or suction recirculation. The troubleshooting methodology is described in detail, also focusing on the role of both the pump designer and the pump user.",
"title": ""
},
{
"docid": "83797d5698f6962141744f591d946fa5",
"text": "In this paper, an S-band internally harmonic matched GaN FET is presented, which is designed so that up to third harmonic impedance is tuned to high efficiency condition. Harmonic load pull measurements were done for a small transistor cell at first. It was found that power added efficiency (PAE) of 78% together with 6W output power can be obtained by tuning impedances up to 3rd harmonic for both input and output sides. Then matching circuit was designed for large gate periphery multi-cell transistor. To make the circuit size small, harmonic matching was done in a hermetically sealed package. With total gate width of 64mm, 330W output power and 62% PAE was successfully obtained.",
"title": ""
},
{
"docid": "e5ce1ddd50a728fab41043324938a554",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "23def38b89358bc1090412e127c7ec2b",
"text": "We describe the design of four ornithopters ranging in wing span from 10 cm to 40 cm, and in weight from 5 g to 45 g. The controllability and power supply are two major considerations, so we compare the efficiency and characteristics between different types of subsystems such as gearbox and tail shape. Our current ornithopter is radio-controlled with inbuilt visual sensing and capable of takeoff and landing. We also concentrate on its wing efficiency based on design inspired by a real insect wing and consider that aspects of insect flight such as delayed stall and wake capture are essential at such small size. Most importantly, the advance ratio, controlled either by enlarging the wing beat amplitude or raising the wing beat frequency, is the most significant factor in an ornithopter which mimics an insect.",
"title": ""
},
{
"docid": "42bb77e398f19f3be69c3852a597aa33",
"text": "The recently released Rodinia benchmark suite enables users to evaluate heterogeneous systems including both accelerators, such as GPUs, and multicore CPUs. As Rodinia sees higher levels of acceptance, it becomes important that researchers understand this new set of benchmarks, especially in how they differ from previous work. In this paper, we present recent extensions to Rodinia and conduct a detailed characterization of the Rodinia benchmarks (including performance results on an NVIDIA GeForce GTX480, the first product released based on the Fermi architecture). We also compare and contrast Rodinia with Parsec to gain insights into the similarities and differences of the two benchmark collections; we apply principal component analysis to analyze the application space coverage of the two suites. Our analysis shows that many of the workloads in Rodinia and Parsec are complementary, capturing different aspects of certain performance metrics.",
"title": ""
},
{
"docid": "b367b94f7ad3142677a27de9756f340b",
"text": "Previous investigations of strength have only focused on biomechanical or psychological determinants, while ignoring the potential interplay and relative contributions of these variables. The purpose of this study was to investigate the relative contributions of biomechanical, anthropometric, and psychological variables to the prediction of maximum parallel barbell back squat strength. Twenty-one college-aged participants (male = 14; female = 7; age = 23 ± 3 years) reported to the laboratory for two visits. The first visit consisted of anthropometric, psychometric, and parallel barbell back squat one-repetition maximum (1RM) testing. On the second visit, participants performed isometric dynamometry testing for the knee, hip, and spinal extensors in a sticking point position-specific manner. Multiple linear regression and correlations were used to investigate the combined and individual relationships between biomechanical, anthropometric, and psychological variables and squat 1RM. Multiple regression revealed only one statistically predictive determinant: fat free mass normalized to height (standardized estimate ± SE = 0.6 ± 0.3; t(16) = 2.28; p = 0.037). Correlation coefficients for individual variables and squat 1RM ranged from r = -0.79-0.83, with biomechanical, anthropometric, experiential, and sex predictors showing the strongest relationships, and psychological variables displaying the weakest relationships. These data suggest that back squat strength in a heterogeneous population is multifactorial and more related to physical rather than psychological variables.",
"title": ""
},
{
"docid": "b79b3497ae4987e00129eab9745e1398",
"text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus,programs and specificationscan be viewed as descriptions of languagesover some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages.By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.",
"title": ""
},
{
"docid": "1831e2a5a75fc85299588323d68947b2",
"text": "The Transaction Processing Performance Council (TPC) is completing development of TPC-DS, a new generation industry standard decision support benchmark. The TPC-DS benchmark, first introduced in the “The Making of TPC-DS” [9] paper at the 32 International Conference on Very Large Data Bases (VLDB), has now entered the TPC’s “Formal Review” phase for new benchmarks; companies and researchers alike can now download the draft benchmark specification and tools for evaluation. The first paper [9] gave an overview of the TPC-DS data model, workload model, and execution rules. This paper details the characteristics of different phases of the workload, namely: database load, query workload and data maintenance; and also their impact to the benchmark’s performance metric. As with prior TPC benchmarks, this workload will be widely used by vendors to demonstrate their capabilities to support complex decision support systems, by customers as a key factor in purchasing servers and software, and by the database community for research and development of optimization techniques.",
"title": ""
},
{
"docid": "0c025ec05a1f98d71c9db5bfded0a607",
"text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. The critical issue of data requirements is also be discussed as well as model choice, modelbuilding and the interpretation and use of results.",
"title": ""
},
{
"docid": "92993ce699e720568d2e1b12a605bc3e",
"text": "Techniques for violent scene detection and affective impact prediction in videos can be deployed in many applications. In MediaEval 2015, we explore deep learning methods to tackle this challenging problem. Our system consists of several deep learning features. First, we train a Convolutional Neural Network (CNN) model with a subset of ImageNet classes selected particularly for violence detection. Second, we adopt a specially designed two-stream CNN framework [1] to extract features on both static frames and motion optical flows. Third, Long Short Term Memory (LSTM) models are applied on top of the two-stream CNN features, which can capture the longer-term temporal dynamics. In addition, several conventional motion and audio features are also extracted as complementary information to the deep learning features. By fusing all the advanced features, we achieve a mean average precision of 0.296 in the violence detection subtask, and an accuracy of 0.418 and 0.488 for arousal and valence respectively in the induced affect detection subtask. 1. SYSTEM DESCRIPTION Figure 1 gives an overview of our system. In this short paper, we briefly describe each of the key components. For more information about the task definitions, interested readers may refer to [2].",
"title": ""
},
{
"docid": "ec0928debb2a2b8a2f1f3ef56b6ff4ea",
"text": "The fifth generation (5G) of mobile broadband shall be a far more complex system compared to earlier generations due to advancements in radio and network technology, increased densification and heterogeneity of network and user equipment, larger number of operating bands, as well as more stringent performance requirement. To cope with the increased complexity of the Radio Resources Management (RRM) of 5G systems, this manuscript advocates the need for a clean slate design of the 5G RRM architecture. We propose to capitalize the large amount of data readily available in the network from measurements and system observations in combination with the most recent advances in the field of machine learning. The result is an RRM architecture based on general-purpose learning framework capable of deriving specific RRM control policies directly from data gathered in the network. The potential of this approach is verified in three case studies and future directions on application of machine learning to RRM are discussed.",
"title": ""
}
] |
scidocsrr
|
ed14cb435d8b344a6ede92e8cc07c96d
|
Automated Tongue Feature Extraction for ZHENG Classification in Traditional Chinese Medicine
|
[
{
"docid": "9d3fde77e0a47d11c7cf9b408abe6cc9",
"text": "Traditional Chinese Medicine (TCM) has a long history and has been recognized as a popular alternative medicine in western countries. Tongue diagnosis is a significant procedure in computer-aided TCM, where tongue image analysis plays a dominant role. In this paper, we proposed a fully automatic tongue detection and tongue segmentation framework, which is an essential step in computer-aided tongue image analysis. Comparing with other existing methods, our method is fully automatic without any need of adjusting parameters for different images and do not need any initialization.",
"title": ""
},
{
"docid": "c90b05657b7673257db617b62d0ed80c",
"text": "Automated tongue image segmentation, in Chinese medicine, is difficult due to two special factors: 1) there are many pathological details on the surface of the tongue, which have a large influence on edge extraction; 2) the shapes of the tongue bodies captured from various persons (with different diseases) are quite different, so they are impossible to describe properly using a predefined deformable template. To address these problems, in this paper, we propose an original technique that is based on a combination of a bi-elliptical deformable template (BEDT) and an active contour model, namely the bi-elliptical deformable contour (BEDC). The BEDT captures gross shape features by using the steepest decent method on its energy function in the parameter space. The BEDC is derived from the BEDT by substituting template forces for classical internal forces, and can deform to fit local details. Our algorithm features fully automatic interpretation of tongue images and a consistent combination of global and local controls via the template force. We apply the BEDC to a large set of clinical tongue images and present experimental results.",
"title": ""
}
] |
[
{
"docid": "55f356248f8ebf2174636cbedeaceaf3",
"text": "In in this paper, we propose a new model regarding foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources. Contribution is presented in three key issues: 1) we propose a novel adaptive shadow model, and show the improvements versus previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description for the foreground based on spatial statistics of the neighboring pixel values, which enhances the detection of background or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components improving the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences including real surveillance videos and well-known benchmark test sets.",
"title": ""
},
{
"docid": "4ee078123815eff49cc5d43550021261",
"text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.",
"title": ""
},
{
"docid": "65385d7aee49806476dc913f6768fc43",
"text": "Software developers spend a significant portion of their resources handling user-submitted bug reports. For software that is widely deployed, the number of bug reports typically outstrips the resources available to triage them. As a result, some reports may be dealt with too slowly or not at all. \n We present a descriptive model of bug report quality based on a statistical analysis of surface features of over 27,000 publicly available bug reports for the Mozilla Firefox project. The model predicts whether a bug report is triaged within a given amount of time. Our analysis of this model has implications for bug reporting systems and suggests features that should be emphasized when composing bug reports. \n We evaluate our model empirically based on its hypothetical performance as an automatic filter of incoming bug reports. Our results show that our model performs significantly better than chance in terms of precision and recall. In addition, we show that our modelcan reduce the overall cost of software maintenance in a setting where the average cost of addressing a bug report is more than 2% of the cost of ignoring an important bug report.",
"title": ""
},
{
"docid": "099f8791628965844b96602aebed90f8",
"text": "Termites have colonized many habitats and are among the most abundant animals in tropical ecosystems, which they modify considerably through their actions. The timing of their rise in abundance and of the dispersal events that gave rise to modern termite lineages is not well understood. To shed light on termite origins and diversification, we sequenced the mitochondrial genome of 48 termite species and combined them with 18 previously sequenced termite mitochondrial genomes for phylogenetic and molecular clock analyses using multiple fossil calibrations. The 66 genomes represent most major clades of termites. Unlike previous phylogenetic studies based on fewer molecular data, our phylogenetic tree is fully resolved for the lower termites. The phylogenetic positions of Macrotermitinae and Apicotermitinae are also resolved as the basal groups in the higher termites, but in the crown termitid groups, including Termitinae + Syntermitinae + Nasutitermitinae + Cubitermitinae, the position of some nodes remains uncertain. Our molecular clock tree indicates that the lineages leading to termites and Cryptocercus roaches diverged 170 Ma (153-196 Ma 95% confidence interval [CI]), that modern Termitidae arose 54 Ma (46-66 Ma 95% CI), and that the crown termitid group arose 40 Ma (35-49 Ma 95% CI). This indicates that the distribution of basal termite clades was influenced by the final stages of the breakup of Pangaea. Our inference of ancestral geographic ranges shows that the Termitidae, which includes more than 75% of extant termite species, most likely originated in Africa or Asia, and acquired their pantropical distribution after a series of dispersal and subsequent diversification events.",
"title": ""
},
{
"docid": "20a484c01402cdc464cf0b46e577686e",
"text": "Healthcare costs have increased dramatically and the demand for highquality care will only grow in our aging society. At the same time,more event data are being collected about care processes. Healthcare Information Systems (HIS) have hundreds of tables with patient-related event data. Therefore, it is quite natural to exploit these data to improve care processes while reducing costs. Data science techniqueswill play a crucial role in this endeavor. Processmining can be used to improve compliance and performance while reducing costs. The chapter sets the scene for process mining in healthcare, thus serving as an introduction to this SpringerBrief.",
"title": ""
},
{
"docid": "3aadfd9d063eeddc09fbd86c82f2bfe4",
"text": "We study the probabilistic generative models parameterized by feedfor-ward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unittothoseofits Markovblanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.",
"title": ""
},
{
"docid": "0c6403b9486b5f44a735192edd807deb",
"text": "Prior to the start of cross-sex hormone therapy (CSH), androgenic progestins are often used to induce amenorrhea in female to male (FtM) pubertal adolescents with gender dysphoria (GD). The aim of this single-center study is to report changes in anthropometry, side effects, safety parameters, and hormone levels in a relatively large cohort of FtM adolescents with a diagnosis of GD at Tanner stage B4 or further, who were treated with lynestrenol (Orgametril®) monotherapy and in combination with testosterone esters (Sustanon®). A retrospective analysis of clinical and biochemical data obtained during at least 6 months of hormonal treatment in FtM adolescents followed at our adolescent gender clinic since 2010 (n = 45) was conducted. McNemar’s test to analyze reported side effects over time was performed. A paired Student’s t test or a Wilcoxon signed-ranks test was performed, as appropriate, on anthropometric and biochemical data. For biochemical analyses, all statistical tests were done in comparison with baseline parameters. Patients who were using oral contraceptives (OC) at intake were excluded if a Mann-Whitney U test indicated influence of OC. Metrorrhagia and acne were most pronounced during the first months of monotherapy and combination therapy respectively and decreased thereafter. Headaches, hot flushes, and fatigue were the most reported side effects. Over the course of treatment, an increase in musculature, hemoglobin, hematocrit, creatinine, and liver enzymes was seen, progressively sliding into male reference ranges. Lipid metabolism shifted to an unfavorable high-density lipoprotein (HDL)/low-density lipoprotein (LDL) ratio; glucose metabolism was not affected. Sex hormone-binding globulin (SHBG), total testosterone, and estradiol levels decreased, and free testosterone slightly increased during monotherapy; total and free testosterone increased significantly during combination therapy. Gonadotropins were only fully suppressed during combination therapy. Anti-Müllerian hormone (AMH) remained stable throughout the treatment. Changes occurred in the first 6 months of treatment and remained mostly stable thereafter. Treatment of FtM gender dysphoric adolescents with lynestrenol monotherapy and in combination with testosterone esters is effective, safe, and inexpensive; however, suppression of gonadotropins is incomplete. Regular blood controls allow screening for unphysiological changes in safety parameters or hormonal levels and for medication abuse.",
"title": ""
},
{
"docid": "d09d9d9f74079981f8f09e829e2af255",
"text": "Determination of sensitive and specific markers of very early AD progression is intended to aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as to lessen the time and cost of clinical trials. Magnetic Resonance (MR)-related biomarkers have been recently identified by the use of machine learning methods for the in vivo differential diagnosis of AD. However, the vast majority of neuroimaging papers investigating this topic are focused on the difference between AD and patients with mild cognitive impairment (MCI), not considering the impact of MCI patients who will (MCIc) or not convert (MCInc) to AD. Morphological T1-weighted MRIs of 137 AD, 76 MCIc, 134 MCInc, and 162 healthy controls (CN) selected from the Alzheimer's disease neuroimaging initiative (ADNI) cohort, were used by an optimized machine learning algorithm. Voxels influencing the classification between these AD-related pre-clinical phases involved hippocampus, entorhinal cortex, basal ganglia, gyrus rectus, precuneus, and cerebellum, all critical regions known to be strongly involved in the pathophysiological mechanisms of AD. Classification accuracy was 76% AD vs. CN, 72% MCIc vs. CN, 66% MCIc vs. MCInc (nested 20-fold cross validation). Our data encourage the application of computer-based diagnosis in clinical practice of AD opening new prospective in the early management of AD patients.",
"title": ""
},
{
"docid": "af4700eadf29386c5623097508ab523d",
"text": "Starting from the premise that working memory is a system for providing access to representations for complex cognition, six requirements for a working memory system are delineated: (1) maintaining structural representations by dynamic bindings, (2) manipulating structural representations, (3) flexible reconfiguration, (4) partial decoupling from long-term memory, (5) controlled retrieval from long-term memory, and (6) encoding of new structures into longterm memory. The chapter proposes an architecture for a system that meets these requirements. The working memory system consists of a declarative and a procedural part, each of which has three embedded components: the activated part of long-term memory, a component for creating new structural representations by dynamic bindings (the ‘‘region of direct access’’ for declarative working memory, and the ‘‘bridge’’ for procedural working memory), and a mechanism for selecting a single element (‘‘focus of attention’’ for declarative working memory, and ‘‘response focus’’ for procedural working memory). The architecture affords two modes of information processing, an analytical and an associative mode. This distinction provides a theoretically founded formulation of a dual-process theory of reasoning. DOI: https://doi.org/10.1016/S0079-7421(09)51002-X Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-28472 Originally published at: Oberauer, Klaus (2009). Design for a Working Memory. Psychology of Learning and Motivation, 51:45100. DOI: https://doi.org/10.1016/S0079-7421(09)51002-X",
"title": ""
},
{
"docid": "4f4817fd70f62b15c0b52311fa677a64",
"text": "Active plasmonics is a burgeoning and challenging subfield of plasmonics. It exploits the active control of surface plasmon resonance. In this review, a first-ever in-depth description of the theoretical relationship between surface plasmon resonance and its affecting factors, which forms the basis for active plasmon control, will be presented. Three categories of active plasmonic structures, consisting of plasmonic structures in tunable dielectric surroundings, plasmonic structures with tunable gap distances, and self-tunable plasmonic structures, will be proposed in terms of the modulation mechanism. The recent advances and current challenges for these three categories of active plasmonic structures will be discussed in detail. The flourishing development of active plasmonic structures opens access to new application fields. A significant part of this review will be devoted to the applications of active plasmonic structures in plasmonic sensing, tunable surface-enhanced Raman scattering, active plasmonic components, and electrochromic smart windows. This review will be concluded with a section on the future challenges and prospects for active plasmonics.",
"title": ""
},
{
"docid": "4e7122172cb7c37416381c251b510948",
"text": "Anatomic and physiologic data are used to analyze the energy expenditure on different components of excitatory signaling in the grey matter of rodent brain. Action potentials and postsynaptic effects of glutamate are predicted to consume much of the energy (47% and 34%, respectively), with the resting potential consuming a smaller amount (13%), and glutamate recycling using only 3%. Energy usage depends strongly on action potential rate--an increase in activity of 1 action potential/cortical neuron/s will raise oxygen consumption by 145 mL/100 g grey matter/h. The energy expended on signaling is a large fraction of the total energy used by the brain; this favors the use of energy efficient neural codes and wiring patterns. Our estimates of energy usage predict the use of distributed codes, with <or=15% of neurons simultaneously active, to reduce energy consumption and allow greater computing power from a fixed number of neurons. Functional magnetic resonance imaging signals are likely to be dominated by changes in energy usage associated with synaptic currents and action potential propagation.",
"title": ""
},
{
"docid": "f6e73a3c09d7eac1ddd60843b82df7b0",
"text": "This paper presents text and data mining in tandem to detect the phishing email. The study employs Multilayer Perceptron (MLP), Decision Trees (DT), Support Vector Machine (SVM), Group Method of Data Handling (GMDH), Probabilistic Neural Net (PNN), Genetic Programming (GP) and Logistic Regression (LR) for classification. A dataset of 2500 phishing and non phishing emails is analyzed after extracting 23 keywords from the email bodies using text mining from the original dataset. Further, we selected 12 most important features using t-statistic based feature selection. Here, we did not find statistically significant difference in sensitivity as indicated by t-test at 1% level of significance, both with and without feature selection across all techniques except PNN. Since, the GP and DT are not statistically significantly different either with or without feature selection at 1% level of significance, DT should be preferred because it yields ‘if-then’ rules, thereby increasing the comprehensibility of the system.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "7d8826c228fa8a3bb8837754d26b8979",
"text": "This paper summarizes the latest, final version of ISO standard 24617-2 “Semantic annotation framework, Part 2: Dialogue acts”. Compared to the preliminary version ISO DIS 24617-2:2010, described in Bunt et al. (2010), the final version additionally includes concepts for annotating rhetorical relations between dialogue units, defines a full-blown compositional semantics for the Dialogue Act Markup Language DiAML (resulting, as a side-effect, in a different treatment of functional dependence relations among dialogue acts and feedback dependence relations); and specifies an optimally transparent XML-based reference format for the representation of DiAML annotations, based on the systematic application of the notion of ‘ideal concrete syntax’. We describe these differences and briefly discuss the design and implementation of an incremental method for dialogue act recognition, which proves the usability of the ISO standard for automatic dialogue annotation.",
"title": ""
},
{
"docid": "129759aca269b13c80270d2ba7311648",
"text": "Although the Capsule Network (CapsNet) has a better proven performance for the recognition of overlapping digits than Convolutional Neural Networks (CNNs), a large number of matrix-vector multiplications between lower-level and higher-level capsules impede efficient implementation of the CapsNet on conventional hardware platforms. Since three-dimensional (3-D) memristor crossbars provide a compact and parallel hardware implementation of neural networks, this paper provides an architecture design to accelerate convolutional and matrix operations of the CapsNet. By using 3-D memristor crossbars, the PrimaryCaps, DigitCaps, and convolutional layers of a CapsNet perform the matrix-vector multiplications in a highly parallel way. Simulations are conducted to recognize digits from the USPS database and to analyse the work efficiency of the proposed circuits. The proposed design provides a new approach to implement the CapsNet on memristor-based circuits.",
"title": ""
},
{
"docid": "8db7b12cb22d60a698c2aaae31bfbe6a",
"text": "The present article describes the basic therapeutic techniques used in the cognitive-behavioral therapy (CBT) of generalized anxiety disorders and reviews the methodological characteristics and outcomes of 13 controlled clinical trials. The studies in general display rigorous methodology, and their outcomes are quite consistent. CBT has been shown to yield clinical improvements in both anxiety and depression that are superior to no treatment and nonspecific control conditions (and at times to either cognitive therapy alone or behavioral therapy alone) at both posttherapy and follow-up. CBT is also associated with low dropout rates, maintained long-term improvements, and the largest within-group and between-group effect sizes relative to all other comparison conditions.",
"title": ""
},
{
"docid": "3bc9a6686bc9d55a71b036b821ac50e1",
"text": "This paper shows that using SRIOV for InfiniBand can enable virtualized HPC, but only if the NIC tunable parameters are set appropriately. In particular, contrary to common belief, our results show that the default policy of aggressive use of interrupt moderation can have a negative impact on the performance of InfiniBand platforms virtualized using SR-IOV. Careful tuning of interrupt moderation benefits both Native and VM platforms and helps to bridge the gap between native and virtualized performance. For some workloads, the performance gap is reduced by 15-30%.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "055cb9aca6b16308793944154dc7866a",
"text": "Learning systems depend on three interrelated components: topologies, cost/performance functions, and learning algorithms. Topologies provide the constraints for the mapping, and the learning algorithms offer the means to find an optimal solution; but the solution is optimal with respect to what? Optimality is characterized by the criterion and in neural network literature, this is the least addressed component, yet it has a decisive influence in generalization performance. Certainly, the assumptions behind the selection of a criterion should be better understood and investigated. Traditionally, least squares has been the benchmark criterion for regression problems; considering classification as a regression problem towards estimating class posterior probabilities, least squares has been employed to train neural network and other classifier topologies to approximate correct labels. The main motivation to utilize least squares in regression simply comes from the intellectual comfort this criterion provides due to its success in traditional linear least squares regression applications – which can be reduced to solving a system of linear equations. For nonlinear regression, the assumption of Gaussianity for the measurement error combined with the maximum likelihood principle could be emphasized to promote this criterion. In nonparametric regression, least squares principle leads to the conditional expectation solution, which is intuitively appealing. Although these are good reasons to use the mean squared error as the cost, it is inherently linked to the assumptions and habits stated above. Consequently, there is information in the error signal that is not captured during the training of nonlinear adaptive systems under non-Gaussian distribution conditions when one insists on second-order statistical criteria. This argument extends to other linear-second-order techniques such as principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA). Recent work tries to generalize these techniques to nonlinear scenarios by utilizing kernel techniques or other heuristics. This begs the question: what other alternative cost functions could be used to train adaptive systems and how could we establish rigorous techniques for extending useful concepts from linear and second-order statistical techniques to nonlinear and higher-order statistical learning methodologies?",
"title": ""
}
] |
scidocsrr
|
e932277aa130906fb7fb49d45ccbf07c
|
A Head-Wearable Short-Baseline Stereo System for the Simultaneous Estimation of Structure and Motion
|
[
{
"docid": "4680bed6fb799e6e181cc1c2a4d56947",
"text": "We address the problem of vision-based multi-person tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. Specifically, we are interested in the application of such a system for supporting path planning algorithms in the avoidance of dynamic obstacles. The complexity of the problem calls for an integrated solution, which extracts as much visual information as possible and combines it through cognitive feedback. We propose such an approach, which jointly estimates camera position, stereo depth, object detections, and trajectories based only on visual information. The interplay between these components is represented in a graphical model. For each frame, we first estimate the ground surface together with a set of object detections. Based on these results, we then address object interactions and estimate trajectories. Finally, we employ the tracking results to predict future motion for dynamic objects and fuse this information with a static occupancy map estimated from dense stereo. The approach is experimentally evaluated on several long and challenging video sequences from busy inner-city locations recorded with different mobile setups. The results show that the proposed integration makes stable tracking and motion prediction possible, and thereby enables path planning in complex and highly dynamic scenes.",
"title": ""
}
] |
[
{
"docid": "8e45716a80300fa86189e99feb26f113",
"text": "BACKGROUND\nWhat is the best way to schedule follow-up appointments? The most popular model requires the patient to negotiate a follow-up appointment time on leaving the office. This process accounts for the majority of follow-up patient scheduling. There are circumstances when this immediate appointment arrangement is not possible, however. The two common processes used to contact patients for follow-up appointments after they have left the office are the postcard reminder method and the prescheduled appointment method.\n\n\nMETHODS\nIn 2001 the two methods used to contact patients for follow-up appointments after they had left the clinic were used for all 2,116 reappointment patients at an ophthalmology practice at Dartmouth-Hitchcock Medical Center. The number of completed successful appointments, the no-show rate, and patient satisfaction for each method were calculated.\n\n\nRESULTS\nA larger number of patient reappointments were completed using the prescheduled appointment procedure than the postcard reminder system (74% vs 54%). The difference between completed and pending appointments (minus no-shows) of the two methods equaled 163 patients per quarter, or 652 patients per year. Additional revenues associated with use of the prescheduled appointment letter method were estimated at $594,600 for 3 years.\n\n\nSUMMARY\nUsing the prescheduled appointment method with a patient notification letter is advised when patients do not schedule their appointments on the way out of the office.",
"title": ""
},
{
"docid": "75cab773845f95a60fa7fbe942144c1d",
"text": "This paper proposes a modular converter system to achieve high power density, high system bandwidth and scalability for isolated bidirectional DC/AC and AC/DC three-phase power conversion. The approach is based on high frequency isolated series resonant converter (SRC) modules with series output connection and a low frequency three-phase unfolder. The performance objectives are realized through elimination of traditional low frequency passive filters used in PWM inverters and instead require high control bandwidth in the SRC modules to achieve high quality AC waveforms. The system operation and performance are verified with simulation and experimental results for a 1 kW prototype.",
"title": ""
},
{
"docid": "5ee610b61deefffc1b054d908587b406",
"text": "Self-shaping of curved structures, especially those involving flexible thin layers, is attracting increasing attention because of their broad potential applications in, e.g., nanoelectromechanical andmicroelectromechanical systems, sensors, artificial skins, stretchable electronics, robotics, and drug delivery. Here, we provide an overview of recent experimental, theoretical, and computational studies on the mechanical selfassembly of strain-engineered thin layers, with an emphasis on systems in which the competition between bending and stretching energy gives rise to a variety of deformations, such as wrinkling, rolling, and twisting. We address the principle of mechanical instabilities, which is often manifested in wrinkling or multistability of strain-engineered thin layers. The principles of shape selection and transition in helical ribbons are also systematically examined. We hope that a more comprehensive understanding of the mechanical principles underlying these rich phenomena can foster the development of techniques for manufacturing functional three-dimensional structures on demand for a broad spectrum of engineering applications.",
"title": ""
},
{
"docid": "1bfc1972a32222a1b5816bb040040374",
"text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.",
"title": ""
},
{
"docid": "bb5c91095f48c6ac9b66a8d223b6aab8",
"text": "This paper analyses the influence of current harmonics on protections devices connected to the power system which can cause serious problems. This paper outlines work that has been conducted to test of overcurrent relay and to identify the impacts of harmonics on distribution utility protection and control systems. The overcurrent relay REJ 525 is based on a microprocessor environment. A self-supervision system continuously monitors the operation of the relay.",
"title": ""
},
{
"docid": "4b1c1194a9292adf76452eda03f7f67f",
"text": "Fin-type field-effect transistors (FinFETs) are promising substitutes for bulk CMOS at the nanoscale. FinFETs are double-gate devices. The two gates of a FinFET can either be shorted for higher perfomance or independently controlled for lower leakage or reduced transistor count. This gives rise to a rich design space. This chapter provides an introduction to various interesting FinFET logic design styles, novel circuit designs, and layout considerations.",
"title": ""
},
{
"docid": "03a02b03e0ff4364ee8aec81d1fca20b",
"text": "GOALS\nThe aim of this study was to analyze the performance of Fuji Intelligent Color Enhancement (FICE) using the classification of Kudo in the differentiation of neoplastic and non-neoplastic raised lesions in ulcerative colitis (UC).\n\n\nBACKGROUND\nThe Kudo classification of mucosal pit patterns is an aid for the differential diagnosis of colorectal polyps in the general population, but no systematic studies are available for all forms of raised lesions in UC.\n\n\nSTUDY\nAll raised, polypoid and nonpolypoid, lesions found during consecutive surveillance colonoscopies with FICE for long-standing UC were included. In the primary prospective analysis, the Kudo classification was used to predict the histology by FICE. In a post hoc analysis, further endoscopic markers were also explored.\n\n\nRESULTS\nTwo hundred and five lesions (mean size, 8 mm; range, 2 to 30 mm) from 59 patients (mean age, 56 y; range, 21 to 79 y) were analyzed. Twenty-three neoplastic (11%), 18 hyperplastic (9%), and 164 inflammatory (80%) lesions were found. Thirty-one lesions (15%), none of which were neoplastic, were unclassifiable according to Kudo. After logistic regression, a strong negative association resulted between endoscopic activity and neoplasia, whereas the presence of a fibrin cap was significantly associated with endoscopic activity. Using FICE, the sensitivity, specificity, and positive and negative likelihood ratios of the Kudo classification were 91%, 76%, 3.8, and 0.12, respectively. The corresponding values by adding the fibrin cap as a marker of inflammation were 91%, 93%, 13, and 0.10, respectively.\n\n\nCONCLUSIONS\nFICE can help to predict the histology of raised lesions in UC. A new classification of pit patterns, based on inflammatory markers, should be developed in the setting of UC to improve the diagnostic performance.",
"title": ""
},
{
"docid": "d5870092a3e8401654b5b9948c77cb0a",
"text": "Recent research shows that there has been increased interest in investigating the role of mood and emotions in the HCI domain. Our moods, however, are complex. They are affected by many dynamic factors and can change multiple times throughout each day. Furthermore, our mood can have significant implications in terms of our experiences, our actions and most importantly on our interactions with other people. We have developed MobiMood, a proof-of-concept social mobile application that enables groups of friends to share their moods with each other. In this paper, we present the results of an exploratory field study of MobiMood, focusing on explicit mood sharing in-situ. Our results highlight that certain contextual factors had an effect on mood and the interpretation of moods. Furthermore, mood sharing and mood awareness appear to be good springboards for conversations and increased communication among users. These and other findings lead to a number of key implications in the design of mobile social awareness applications.",
"title": ""
},
{
"docid": "dd302eec2a7ff7bf0fb9310437f7291e",
"text": "Traditional electric-powered wheelchairs are normally controlled by users via joysticks, which cannot satisfy the needs of elderly and disabled users who have restricted limb movements caused by some diseases such as parkingson’s disease and quadriplegics. This paper presents a novel hands-free control system for intelligent wheelchairs based on visual recognition of head gestures. The traditional Adaboost face detection algorithm and Camshift object tracking algorithm are combined in our system to achieve accurate face detection, tracking and gesture recognition in real time. It is intended to be used as the human-friendly interface for elderly and disabled people to operate our intelligent wheelchair using their head gestures rahter than their hands. Experimental results are given to demonstrate the feasibility and performance of the proposed head gesture based control strategy.",
"title": ""
},
{
"docid": "d2af69233bf30376afb81b204b063c81",
"text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.",
"title": ""
},
{
"docid": "201d9105d956bc8cb8d692490d185487",
"text": "BACKGROUND\nDespite its evident clinical benefits, single-incision laparoscopic surgery (SILS) imposes inherent limitations of collision between external arms and inadequate triangulation because multiple instruments are inserted through a single port at the same time.\n\n\nMETHODS\nA robot platform appropriate for SILS was developed wherein an elbowed instrument can be equipped to easily create surgical triangulation without the interference of robot arms. A novel joint mechanism for a surgical instrument actuated by a rigid link was designed for high torque transmission capability.\n\n\nRESULTS\nThe feasibility and effectiveness of the robot was checked through three kinds of preliminary tests: payload, block transfer, and ex vivo test. Measurements showed that the proposed robot has a payload capability >15 N with 7 mm diameter.\n\n\nCONCLUSIONS\nThe proposed robot is effective and appropriate for SILS, overcoming inadequate triangulation and improving workspace and traction force capability.",
"title": ""
},
{
"docid": "34d16a5eb254846f431e2c716309e20a",
"text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal. 26, 193-206.",
"title": ""
},
{
"docid": "030eb906555eeae5b9e1073c0346618a",
"text": "In many real applications, RDF (Resource Description Framework) has been widely used as a W3C standard to describe data in the Semantic Web. In practice, RDF data may often suffer from the unreliability of their data sources, and exhibit errors or inconsistencies. In this paper, we model such unreliable RDF data by probabilistic RDF graphs, and study an important problem, keyword search query over probabilistic RDF graphs (namely, the pg-KWS query). To retrieve meaningful keyword search answers, we design the score rankings for subgraph answers specific for RDF data. Furthermore, we propose effective pruning methods (via offline pre-computed score bounds and probabilistic threshold) to quickly filter out false alarms. We construct an index over the pre-computed data for RDF, and present an efficient query answering approach through the index. Extensive experiments have been conducted to verify the effectiveness and efficiency of our proposed approaches.",
"title": ""
},
{
"docid": "6dca32a1e4ba096300c435fd0dce7858",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading inverse problem theory and methods for model parameter estimation is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "a4d6e4926ba7eb63b4d587ec4e983363",
"text": "Modern research has broadened scientific knowledge and revealed the interdisciplinary nature of the sciences. For today's students, this advance translates to learning a more diverse range of concepts, usually in less time, and without supporting resources. Students can benefit from technology-enhanced learning supplements that unify concepts and are delivered on-demand over the Internet. Such supplements, like imaging informatics databases, serve as innovative references for biomedical information, but could improve their interaction interfaces to support learning. With information from these digital datasets, multimedia learning tools can be designed to transform learning into an active process where students can visualize relationships over time, interact with dynamic content, and immediately test their knowledge. This approach bridges knowledge gaps, fosters conceptual understanding, and builds problem-solving and critical thinking skills-all essential components to informatics training for science and medicine. Additional benefits include cost-free access and ease of dissemination over the Internet or CD-ROM. However, current methods for the design of multimedia learning modules are not standardized and lack strong instructional design. Pressure from administrators at the top and students from the bottom are pushing faculty to use modern technology to address the learning needs and expectations of contemporary students. Yet, faculty lack adequate support and training to adopt this new approach. So how can faculty learn to create educational multimedia materials for their students? This paper provides guidelines on best practices in educational multimedia design, derived from the Virtual Labs Project at Stanford University. The development of a multimedia module consists of five phases: (1) understand the learning problem and the users needs; (2) design the content to harness the enabling technologies; (3) build multimedia materials with web style standards and human factors principles; (4) user testing; (5) evaluate and improve design.",
"title": ""
},
{
"docid": "33aca3fca17c8a9c786b35b4da4de47c",
"text": "This paper is concerned with the design of the power control system for a single-phase voltage source inverter feeding a parallel resonant induction heating load. The control of the inverter output current, meaning the active component of the current through the induction coil when the control frequency is equal or slightly exceeds the resonant frequency, is achieved by a Proportional-IntegralDerivative controller tuned in accordance with the Modulus Optimum criterion in Kessler variant. The response of the current loop for different work pipes and set currents has been tested by simulation under the Matlab-Simulink environment and illustrates a very good behavior of the control system.",
"title": ""
},
{
"docid": "c6fdea5b2f3d33c2b12d9c4ae797cafb",
"text": "When is the best time to purchase a flight? Flight prices fluctuate constantly, so purchasing at different times could mean large differences in price. This project uses machine learning classification in order to predict at a given time, considering properties of a flight, whether one should book the flight or wait for a better price.",
"title": ""
},
{
"docid": "44ef466e59603fc90a30217e7fab00cf",
"text": "We address the problem of content-aware, foresighted resource reciprocation for media streaming over peer-to-peer (P2P) networks. The envisioned P2P network consists of autonomous and self-interested peers trying to maximize their individual utilities. The resource reciprocation among such peers is modeled as a stochastic game and peers determine the optimal strategies for resource reciprocation using a Markov Decision Process (MDP) framework. Unlike existing solutions, this framework takes the content and the characteristics of the video signal into account by introducing an artificial currency in order to maximize the video quality in the entire network.",
"title": ""
},
{
"docid": "58b2ee3d0a4f61d4db883bc0a896f8f4",
"text": "While applications for mobile devices have become extremely important in the last few years, little public information exists on mobile application usage behavior. We describe a large-scale deployment-based research study that logged detailed application usage information from over 4,100 users of Android-powered mobile devices. We present two types of results from analyzing this data: basic descriptive statistics and contextual descriptive statistics. In the case of the former, we find that the average session with an application lasts less than a minute, even though users spend almost an hour a day using their phones. Our contextual findings include those related to time of day and location. For instance, we show that news applications are most popular in the morning and games are at night, but communication applications dominate through most of the day. We also find that despite the variety of apps available, communication applications are almost always the first used upon a device's waking from sleep. In addition, we discuss the notion of a virtual application sensor, which we used to collect the data.",
"title": ""
}
] |
scidocsrr
|
dba1831a4689d4d5928b7a5541ec6ce5
|
Physics-based fast single image fog removal
|
[
{
"docid": "9323c74e39a677c28d1c082b12e1f587",
"text": "Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the. scene albedo and depth are. two statistically independent latent layers. We. show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.",
"title": ""
},
{
"docid": "c5427ac777eaa3ecf25cb96a124eddfe",
"text": "One source of difficulties when processing outdoor images is the presence of haze, fog or smoke which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with other is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the possibility to handle both color images or gray level images since the ambiguity between the presence of fog and the objects with low color saturation is solved by assuming only small objects can have colors with low saturation. The algorithm is controlled only by a few parameters and consists in: atmospheric veil inference, image restoration and smoothing, tone mapping. A comparative study and quantitative evaluation is proposed with a few other state of the art algorithms which demonstrates that similar or better quality results are obtained. Finally, an application is presented to lane-marking extraction in gray level images, illustrating the interest of the approach.",
"title": ""
}
] |
[
{
"docid": "e7d3fae34553c61827b78e50c2e205ee",
"text": "Speaker Identification (SI) is the process of identifying the speaker from a given utterance by comparing the voice biometrics of the utterance with those utterance models stored beforehand. SI technologies are taken a new direction due to the advances in artificial intelligence and have been used widely in various domains. Feature extraction is one of the most important aspects of SI, which significantly influences the SI process and performance. This systematic review is conducted to identify, compare, and analyze various feature extraction approaches, methods, and algorithms of SI to provide a reference on feature extraction approaches for SI applications and future studies. The review was conducted according to Kitchenham systematic review methodology and guidelines, and provides an in-depth analysis on proposals and implementations of SI feature extraction methods discussed in the literature between year 2011 and 2106. Three research questions were determined and an initial set of 535 publications were identified to answer the questions. After applying exclusion criteria 160 related publications were shortlisted and reviewed in this paper; these papers were considered to answer the research questions. Results indicate that pure Mel-Frequency Cepstral Coefficients (MFCCs) based feature extraction approaches have been used more than any other approach. Furthermore, other MFCC variations, such as MFCC fusion and cleansing approaches, are proven to be very popular as well. This study identified that the current SI research trend is to develop a robust universal SI framework to address the important problems of SI such as adaptability, complexity, multi-lingual recognition, and noise robustness. The results presented in this research are based on past publications, citations, and number of implementations with citations being most relevant. This paper also presents the general process of SI. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8aa6e81b8dd30fb562e5a31356e61a03",
"text": "In this paper, we propose a honeypot architecture for detecting and analyzing unknown network attacks. The main focus of our approach lies in improving the “significance” of recorded events and network traffic that need to be analyzed by a human network security operator in order to identify a new attacking pattern. Our architecture aims to achieve this goal by combining three main components: 1. a packet filter that suppresses all known attacking packets, 2. a proxy host that performs session-individual logging of network traffic, and 3. a honeypot host that executes actual network services to be potentially attacked from the Internet in a carefully supervised environment and that reports back to the proxy host upon the detection of suspicious behavior. Experiences with our first prototype of this concept show that it is relatively easy to specify suspicious behavior and that traffic belonging to an attack can be successfully identified and marked.",
"title": ""
},
{
"docid": "133dc8b0eb097da0d0a815821732d288",
"text": "In this paper, we revisit fully homomorphic encryption (FHE) based on GSW and its ring variants. We notice that the internal product of GSW can be replaced by a simpler external product between a GSW and an LWE ciphertext. We show that the bootstrapping scheme FHEW of Ducas and Micciancio [14] can be expressed only in terms of this external product. As a result, we obtain a speed up from less than 1 second to less than 0.1 seconds. We also reduce the 1GB bootstrapping key size to 24MB, preserving the same security levels, and we improve the noise propagation overhead by replacing exact decomposition algorithms with approximate ones. Moreover, our external product allows to explain the unique asymmetry in the noise propagation of GSW samples and makes it possible to evaluate deterministic automata homomorphically as in [16] in an efficient way with a noise overhead only linear in the length of the tested word. Finally, we provide an alternative practical analysis of LWE based scheme, which directly relates the security parameter to the error rate of LWE and the entropy of the LWE secret key.",
"title": ""
},
{
"docid": "197c9e1b89bf4d28ab8fc4be2d617370",
"text": "OBJECTIVE\nParkinson's disease (PD) is a progressive neurological disorder characterised by a large number of motor and non-motor features that can impact on function to a variable degree. This review describes the clinical characteristics of PD with emphasis on those features that differentiate the disease from other parkinsonian disorders.\n\n\nMETHODS\nA MedLine search was performed to identify studies that assess the clinical characteristics of PD. Search terms included \"Parkinson's disease\", \"diagnosis\" and \"signs and symptoms\".\n\n\nRESULTS\nBecause there is no definitive test for the diagnosis of PD, the disease must be diagnosed based on clinical criteria. Rest tremor, bradykinesia, rigidity and loss of postural reflexes are generally considered the cardinal signs of PD. The presence and specific presentation of these features are used to differentiate PD from related parkinsonian disorders. Other clinical features include secondary motor symptoms (eg, hypomimia, dysarthria, dysphagia, sialorrhoea, micrographia, shuffling gait, festination, freezing, dystonia, glabellar reflexes), non-motor symptoms (eg, autonomic dysfunction, cognitive/neurobehavioral abnormalities, sleep disorders and sensory abnormalities such as anosmia, paresthesias and pain). Absence of rest tremor, early occurrence of gait difficulty, postural instability, dementia, hallucinations, and the presence of dysautonomia, ophthalmoparesis, ataxia and other atypical features, coupled with poor or no response to levodopa, suggest diagnoses other than PD.\n\n\nCONCLUSIONS\nA thorough understanding of the broad spectrum of clinical manifestations of PD is essential to the proper diagnosis of the disease. Genetic mutations or variants, neuroimaging abnormalities and other tests are potential biomarkers that may improve diagnosis and allow the identification of persons at risk.",
"title": ""
},
{
"docid": "c3337e6fa136bd535f88ee4fad520d9e",
"text": "Microprocessor execution speeds are improving at a rate of 50%-80% per year while DRAM access times are improving at a much lower rate of 5%-10% per year. Computer systems are rapidly approaching the point at which overall system performance is determined not by the speed of the CPU but by the memory system speed. We present a high performance memory system architecture that overcomes the growing speed disparity between high performance microprocessors and current generation DRAMs. A novel prediction and prefetching technique is combined with a distributed cache architecture to build a high performance memory system. We use a table based prediction scheme with a prediction cache to prefetch data from the on-chip DRAM array to an on-chip SRAM prefetch bu er. By prefetching data we are able to hide the large latency associated with DRAM access and cycle times. Our experiments show that with a small (32 KB) prediction cache we can get an e ective main memory access time that is close to the access time of larger secondary caches.",
"title": ""
},
{
"docid": "b9e8007220be2887b9830c05c283f8a5",
"text": "INTRODUCTION\nHealth-care professionals are trained health-care providers who occupy a potential vanguard position in human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) prevention programs and the management of AIDS patients. This study was performed to assess HIV/AIDS-related knowledge, attitude, and practice (KAP) and perceptions among health-care professionals at a tertiary health-care institution in Uttarakhand, India, and to identify the target group where more education on HIV is needed.\n\n\nMATERIALS AND METHODS\nA cross-sectional KAP survey was conducted among five groups comprising consultants, residents, medical students, laboratory technicians, and nurses. Probability proportional to size sampling was used for generating random samples. Data analysis was performed using charts and tables in Microsoft Excel 2016, and statistical analysis was performed using the Statistical Package for the Social Science software version 20.0.\n\n\nRESULTS\nMost participants had incomplete knowledge regarding the various aspects of HIV/AIDS. Attitude in all the study groups was receptive toward people living with HIV/AIDS. Practical application of knowledge was best observed in the clinicians as well as medical students. Poor performance by technicians and nurses was observed in prevention and prophylaxis. All groups were well informed about the National AIDS Control Policy except technicians.\n\n\nCONCLUSION\nPoor knowledge about HIV infection, particularly among the young medical students and paramedics, is evidence of the lacunae in the teaching system, which must be kept in mind while formulating teaching programs. As suggested by the respondents, Information Education Communication activities should be improvised making use of print, electronic, and social media along with interactive awareness sessions, regular continuing medical educations, and seminars to ensure good quality of safe modern medical care.",
"title": ""
},
{
"docid": "3d5e2e0f0b9cefd240de2fd952eaf961",
"text": "This paper focuses on detecting anomalies in a digital video broadcasting (DVB) system from providers’ perspective. We learn a probabilistic deterministic real timed automaton profiling benign behavior of encryption control in the DVB control access system. This profile is used as a one-class classifier. Anomalous items in a testing sequence are detected when the sequence is not accepted by the learned model.",
"title": ""
},
{
"docid": "5c03be451f3610f39c94043d30314617",
"text": "Syphilis is a sexually transmitted disease (STD) produced by Treponema pallidum, which mainly affects humans and is able to invade practically any organ in the body. Its infection facilitates the transmission of other STDs. Since the end of the last decade, successive outbreaks of syphilis have been reported in most western European countries. Like other STDs, syphilis is a notifiable disease in the European Union. In Spain, epidemiological information is obtained nationwide via the country's system for recording notifiable diseases (Spanish acronym EDO) and the national microbiological information system (Spanish acronym SIM), which compiles information from a network of 46 sentinel laboratories in twelve Spanish regions. The STDs that are epidemiologically controlled are gonococcal infection, syphilis, and congenital syphilis. The incidence of each of these diseases is recorded weekly. The information compiled indicates an increase in the cases of syphilis and gonococcal infection in Spain in recent years. According to the EDO, in 1999, the number of cases of syphilis per 100,000 inhabitants was recorded to be 1.69, which has risen to 4.38 in 2007. In this article, we review the reappearance and the evolution of this infectious disease in eight European countries, and alert dentists to the importance of a) diagnosing sexually-transmitted diseases and b) notifying the centres that control them.",
"title": ""
},
{
"docid": "38db17ce89e1a046d7d37213b59c8163",
"text": "Cardinality estimation has a wide range of applications and is of particular importance in database systems. Various algorithms have been proposed in the past, and the HyperLogLog algorithm is one of them. In this paper, we present a series of improvements to this algorithm that reduce its memory requirements and significantly increase its accuracy for an important range of cardinalities. We have implemented our proposed algorithm for a system at Google and evaluated it empirically, comparing it to the original HyperLogLog algorithm. Like HyperLogLog, our improved algorithm parallelizes perfectly and computes the cardinality estimate in a single pass.",
"title": ""
},
{
"docid": "80cca2365ac99b9570c13b2d76b738e8",
"text": "Video clickstream data are important for understanding user behaviors and improving online video services. Various visual analytics techniques have been proposed to explore patterns in these data. However, those techniques are mainly developed for analysis and do not sufficiently support presentations. It is still difficult for data analysts to convey their findings to an audience without prior knowledge. In this paper, we propose to use animated narrative visualization to present video clickstream data. Compared with traditional methods which directly turn click events into animations, our animated narrative visualization focuses on conveying the patterns in the data to a general audience and adopts two novel designs, non-linear time mapping and foreshadowing, to make the presentation more engaging and interesting. Our non-linear time mapping method keeps the interesting parts as the focus of the animation while compressing the uninteresting parts as the context. The foreshadowing techniques can engage the audience and alert them to the events in the animation. Our user study indicates the effectiveness of our system and provides guidelines for the design of similar systems.",
"title": ""
},
{
"docid": "3e767477d7b2f36badd1f581262794cd",
"text": "Inspired by the path transform (PT) algorithm of Zelinsky et al. the novel algorithm of complete coverage called complete coverage D* (CCD*) algorithm is developed, based on the D* search of the two-dimensional occupancy grid map of the environment. Unlike the original PT algorithm the CCD* algorithm takes the robot’s dimension into account, with emphasis on safety of motion and reductions of path length and search time. Additionally, the proposed CCD* algorithm has ability to produce new complete coverage path as the environment changes. The algorithms were tested on a Pioneer 3DX mobile robot equipped with a laser range finder.",
"title": ""
},
{
"docid": "748eae887bcda0695cbcf1ba1141dd79",
"text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Comparing with previous works, the proposed reconfigurable BPF features wider BW tuning range with maximum number of tuning states.",
"title": ""
},
{
"docid": "5ca75490c015685a1fc670b2ee5103ff",
"text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.",
"title": ""
},
{
"docid": "ab62cea43aa3ddc848cf129f0ea391a4",
"text": "In this work we propose a method for converting triangular meshes into LEGO bricks through a voxel representation of boundary meshes. We present a novel voxelization approach that uses points sampled from a surface model to define which cubes (voxels) and their associated colors will compose the model. All steps of the algorithm were implemented on the GPU and real-time performance was achieved with satisfactory volumetric resolutions. Rendering results are illustrated using realistic graphics techniques such as screen space ambient occlusion and irradiance maps.",
"title": ""
},
{
"docid": "5dca3981eb4c353712b51f3cc32ff3ec",
"text": "We present a new classification architecture based on autoassociative neural networks that are used to learn discriminant models of each class. The proposed architecture has several interesting properties with respect to other model-based classifiers like nearest-neighbors or radial basis functions: it has a low computational complexity and uses a compact distributed representation of the models. The classifier is also well suited for the incorporation of a priori knowledge by means of a problem-specific distance measure. In particular, we will show that tangent distance (Simard, Le Cun, & Denker, 1993) can be used to achieve transformation invariance during learning and recognition. We demonstrate the application of this classifier to optical character recognition, where it has achieved state-of-the-art results on several reference databases. Relations to other models, in particular those based on principal component analysis, are also discussed.",
"title": ""
},
{
"docid": "647b76de7edbca25accdd65fed64d34e",
"text": "Despite the evidence that social video conveys rich human personality information, research investigating the automatic prediction of personality impressions in vlogging has shown that, amongst the Big-Five traits, automatic nonverbal behavioral cues are useful to predict mainly the Extraversion trait. This finding, also reported in other conversational settings, indicates that personality information may be coded in other behavioral dimensions like the verbal channel, which has been less studied in multimodal interaction research. In this paper, we address the task of predicting personality impressions from vloggers based on what they say in their YouTube videos. First, we use manual transcripts of vlogs and verbal content analysis techniques to understand the ability of verbal content for the prediction of crowdsourced Big-Five personality impressions. Second, we explore the feasibility of a fully-automatic framework in which transcripts are obtained using automatic speech recognition (ASR). Our results show that the analysis of error-free verbal content is useful to predict four of the Big-Five traits, three of them better than using nonverbal cues, and that the errors caused by the ASR system decrease the performance significantly.",
"title": ""
},
{
"docid": "c676ccb53845c7108e07d9b08bccab46",
"text": "-This paper is describing the recently introduced proportional-resonant (PR) controllers and their suitability for grid-connected converters current control. It is shown that the known shortcomings associated with PI controllers like steady-state error for single-phase converters and the need of decoupling for three-phase converters can be alleviated. Additionally, selective harmonic compensation is also possible with PR controllers. Suggested control-diagrams for three-phase grid converters and active filters are also presented. A practical application of PR current control for a photovoltaic (PV) inverter is also described. Index Terms current controller, grid converters, photovoltaic inverter",
"title": ""
},
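For orientation only, the ideal proportional-resonant controller that the passage above contrasts with the PI controller is commonly written as the transfer function below; the notation (Kp, Ki, the resonant frequency ω0 set to the grid frequency, and the harmonic orders h) is standard usage assumed here rather than taken from the paper itself.

```latex
% Ideal PR current controller (standard form, assumed notation)
G_{PR}(s) = K_p + K_i \, \frac{s}{s^2 + \omega_0^2}
% Selective harmonic compensation adds resonant terms at the h-th harmonics:
G_{h}(s) = \sum_{h = 3,5,7,\ldots} K_{ih} \, \frac{s}{s^2 + (h\omega_0)^2}
```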
{
"docid": "df152d3c4dd667b642415b14c25b4513",
"text": "We propose a methodology for automatic synthesis of embedded control software that accounts for exogenous disturbances. The resulting system is guaranteed, by construction, to satisfy a given specification expressed in linear temporal logic. The embedded control software consists of three components: a goal generator, a trajectory planner, and a continuous controller. We demonstrate the effectiveness of the proposed technique through an example of an autonomous vehicle navigating an urban environment. This example also illustrates that the system is not only robust with respect to exogenous disturbances but also capable of handling violation of the environment assumptions.",
"title": ""
},
{
"docid": "b00311730b7b9b4f79cdd7bde5aa84f6",
"text": "While neural networks demonstrate stronger capabilities in pattern recognition nowadays, they are also becoming larger and deeper. As a result, the effort needed to train a network also increases dramatically. In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained. As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e neural Trojans, into the neural IP. We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective. The input anomaly detection approach is able to detect 99.8% of Trojan triggers although with 12.2% false positive. The re-training approach is able to prevent 94.1% of Trojan triggers from triggering the Trojan although it requires that the neural IP be reconfigurable. In the input preprocessing approach, 90.2% of Trojan triggers are rendered ineffective and no assumption about the neural IP is needed.",
"title": ""
},
{
"docid": "cb3f1598c2769b373a20b4dddd8b35ea",
"text": "An image hash should be (1) robust to allowable operations and (2) sensitive to illegal manipulations and distinct queries. Some applications also require the hash to be able to localize image tampering. This requires the hash to contain both robust content and alignment information to meet the above criterion. Fulfilling this is difficult because of two contradictory requirements. First, the hash should be small and second, to verify authenticity and then localize tampering, the amount of information in the hash about the original required would be large. Hence a tradeoff between these requirements needs to be found. This paper presents an image hashing method that addresses this concern, to not only detect but also localize tampering using a small signature (< 1kB). Illustrative experiments bring out the efficacy of the proposed method compared to existing methods.",
"title": ""
}
] |
scidocsrr
|
1832330ea64a4151afb33bb223fac395
|
Statistical Post-editing and Quality Estimation for Machine Translation Systems
|
[
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
}
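The positive passage above describes the Stanford bidirectional dependency-network tagger. Purely to illustrate the task itself (not that specific model), the snippet below runs an off-the-shelf tagger from NLTK; it assumes the nltk package and its 'punkt' and 'averaged_perceptron_tagger' resources are installed, and its accuracy is not that reported in the paper.

```python
# Illustrates part-of-speech tagging with NLTK's default tagger,
# not the dependency-network tagger described in the passage.
# One-time setup (assumption): nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

sentence = "The new tagger reduces error on the Penn Treebank WSJ ."
tokens = nltk.word_tokenize(sentence)   # split the sentence into tokens
print(nltk.pos_tag(tokens))             # e.g. [('The', 'DT'), ('new', 'JJ'), ('tagger', 'NN'), ...]
```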
] |
[
{
"docid": "dbc66199d6873d990a8df18ce7adf01d",
"text": "Facebook has rapidly become the most popular Social Networking Site (SNS) among faculty and students in higher education institutions in recent years. Due to the various interactive and collaborative features Facebook supports, it offers great opportunities for higher education institutions to support student engagement and improve different aspects of teaching and learning. To understand the social aspects of Facebook use among students and how they perceive using it for academic purposes, an exploratory survey has been distributed to 105 local and international students at a large public technology university in Malaysia. Results reveal consistent patterns of usage compared to what has been reported in literature reviews in relation to the intent of use and the current use for educational purposes. A comparison was conducted of male and female, international and local, postgraduate and undergraduate students respectively, using nonparametric tests. The results indicate that the students’ perception of using Facebook for academic purposes is not significantly related to students’ gender or students’ background; while it is significantly related to study level and students’ experience. Moreover, based on the overall results of the survey and literature reviews, the paper presents recommendations and suggestions for further research of social networking in a higher education context.",
"title": ""
},
{
"docid": "ed73d3c3e2961e10ff5843ef1062d7fe",
"text": "Barcodes have been long used for data storage. Detecting and locating barcodes in images of complex background is an essential yet challenging step in the process of automatic barcode reading. This work proposed an algorithm that localizes and segments two-dimensional quick response (QR) barcodes. The localization involved a convolutional neural network that could detect partial QR barcodes. Majority voting was then applied to determine barcode locations. Then image processing algorithms were implemented to segment barcodes from the background. Experimental results shows that the proposed approach was robust to detect QR barcodes with rotation and deformation.",
"title": ""
},
{
"docid": "ee5670c36cf9037918ecd176dae3c881",
"text": "This paper focuses on the motion control problem of an omnidirectional mobile robot. A new control method based on the inverse input-output linearized kinematic model is proposed. As the actuator saturation and actuator dynamics have important impacts on the robot performance, this control law takes into account these two aspects and guarantees the stability of the closed-loop control system. Real-world experiments with an omnidirectional middle-size RoboCup robot verifies the performance of this proposed control algorithm.",
"title": ""
},
{
"docid": "82a1285063aadcebd386fac6cb5214f0",
"text": "Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. Differently, generation-based fuzzing generates inputs from a specification (e.g., grammar). They can quickly carry the fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs.",
"title": ""
},
{
"docid": "3b1ad3d3b05cf4ec1241f2885719aedd",
"text": "A specific training program emphasizing the neuromuscular recruitment of the plantar intrinsic foot muscles, colloquially referred to as \"short foot\" exercise (SFE) training, has been suggested as a means to dynamically support the medial longitudinal arch (MLA) during functional tasks. A single-group repeated measures pre- and post-intervention study design was utilized to determine if a 4-week intrinsic foot muscle training program would impact the amount of navicular drop (ND), increase the arch height index (AHI), improve performance during a unilateral functional reaching maneuver, or the qualitative assessment of the ability to hold the arch position in single limb stance position in an asymptomatic cohort. 21 asymptomatic subjects (42 feet) completed the 4-week SFE training program. Subject ND decreased by a mean of 1.8 mm at 4 weeks and 2.2 mm at 8 weeks (p < 0.05). AHI increased from 28 to 29% (p < 0.05). Intrinsic foot muscle performance during a static unilateral balancing activity improved from a grade of fair to good (p < 0.001) and subjects experienced a significant improvement during a functional balance and reach task in all directions with the exception of an anterior reach (p < 0.05). This study offers preliminary evidence to suggest that SFE training may have value in statically and dynamically supporting the MLA. Further research regarding the value of this exercise intervention in foot posture type or pathology specific patient populations is warranted.",
"title": ""
},
{
"docid": "7a01ddcc25e25e64b231fcee2c8b96b3",
"text": "In the following pages, I shall demonstrate that there is a psychological technique which makes it possible to interpret dreams, and that on the application of this technique, every dream will reveal itself as a psychological structure, full of significance, and one which may be assigned to a specific place in the psychic activities of the waking state. Further, I shall endeavour to elucidate the processes which underlie the strangeness and obscurity of dreams, and to deduce from these processes the nature of the psychic forces whose conflict or co-operation is responsible for our dreams.",
"title": ""
},
{
"docid": "e805148f883204562e25a052d6b35505",
"text": "In patients with chronic stroke, the primary motor cortex of the intact hemisphere (M1(intact hemisphere)) may influence functional recovery, possibly through transcallosal effects exerted over M1 in the lesioned hemisphere (M1(lesioned hemisphere)). Here, we studied interhemispheric inhibition (IHI) between M1(intact hemisphere) and M1(lesioned hemisphere) in the process of generation of a voluntary movement by the paretic hand in patients with chronic subcortical stroke and in healthy volunteers. IHI was evaluated in both hands preceding the onset of unilateral voluntary index finger movements (paretic hand in patients, right hand in controls) in a simple reaction time paradigm. IHI at rest and shortly after the Go signal were comparable in patients and controls. Closer to movement onset, IHI targeting the moving index finger turned into facilitation in controls but remained deep in patients, a finding that correlated with poor motor performance. These results document an abnormally high interhemispheric inhibitory drive from M1(intact hemisphere) to M1(lesioned hemisphere) in the process of generation of a voluntary movement by the paretic hand. It is conceivable that this abnormality could adversely influence motor recovery in some patients with subcortical stroke, an interpretation consistent with models of interhemispheric competition in motor and sensory systems.",
"title": ""
},
{
"docid": "72fb6765b43f47abc129c073bfdcdba5",
"text": "The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR—and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule based technologies and/or machine learning techniques may be relevant: Following compliance checklists and codes of conduct; Supporting risk assessments; Complying with the new regulations regarding technologies that perform automatic profiling; Complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements that GDPR (or organisations that need to comply with GDPR) state for explanation and justification of reasoning imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances.",
"title": ""
},
{
"docid": "a95c840844cb869d546a372e9deb065a",
"text": "Abstract—All currencies around the world look very different from each other. For instance, the size, color, and pattern of the paper are different. With the development of modern banking services, automatic methods for paper currency recognition become important in many applications like vending machines. One of the currency recognition architecture’s phases is Feature detection and description. There are many algorithms that are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm, which merges the advantages given in the current SIFT and SURF algorithms, which we call, Speeded up Robust ScaleInvariant Feature Transform (SR-SIFT) algorithm. Our proposed SRSIFT algorithm overcomes the problems of both the SIFT and SURF algorithms. The proposed algorithm aims to speed up the SIFT feature detection algorithm and keep it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially in small and minimum number of best key points, increases the distribution of the number of best key points on the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best point distribution inside the currency edge than the other two algorithms.",
"title": ""
},
{
"docid": "7ca1c9096c6176cb841ae7f0e7262cb7",
"text": "“Industry 4.0” is recognized as the future of industrial production in which concepts as Smart Factory and Decentralized Decision Making are fundamental. This paper proposes a novel strategy to support decentralized decision, whilst identifying opportunities and challenges of Industry 4.0 contextualizing the potential that represents industrial digitalization and how technological advances can contribute for a new perspective on manufacturing production. It is analysed a set of barriers to the full implementation of Industry 4.0 vision, identifying areas in which decision support is vital. Then, for each of the identified areas, the authors propose a strategy, characterizing it together with the level of complexity that is involved in the different processes. The strategies proposed are derived from the needs of two of Industry 4.0 main characteristics: horizontal integration and vertical integration. For each case, decision approaches are proposed concerning the type of decision required (strategic, tactical, operational and real-time). Validation results are provided together with a discussion on the main challenges that might be an obstacle for a successful decision strategy.",
"title": ""
},
{
"docid": "cca61271fe31513cb90c2ac7ecb0b708",
"text": "This paper deals with the synthesis of fuzzy state feedback controller of induction motor with optimal performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate a non linear system in the synchronous d-q frame rotating with electromagnetic field-oriented. Next, a fuzzy controller is designed to stabilise the induction motor and guaranteed a minimum disturbance attenuation level for the closed-loop system. The gains of fuzzy control are obtained by solving a set of Linear Matrix Inequality (LMI). Finally, simulation results are given to demonstrate the controller’s effectiveness. Keywords—Rejection disturbance, fuzzy modelling, open-loop control, Fuzzy feedback controller, fuzzy observer, Linear Matrix Inequality (LMI)",
"title": ""
},
{
"docid": "d7de2d835ce5a9f973a41b6f70a41512",
"text": "This study addresses generating counterfactual explanations with multimodal information. Our goal is not only to classify a video into a specific category, but also to provide explanations on why it is not predicted as part of a specific class with a combination of visual-linguistic information. Requirements that the expected output should satisfy are referred to as counterfactuality in this paper: (1) Compatibility of visual-linguistic explanations, and (2) Positiveness/negativeness for the specific positive/negative class. Exploiting a spatio-temporal region (tube) and an attribute as visual and linguistic explanations respectively, the explanation model is trained to predict the counterfactuality for possible combinations of multimodal information in a posthoc manner. The optimization problem, which appears during the training/inference process, can be efficiently solved by inserting a novel neural network layer, namely the maximum subpath layer. We demonstrated the effectiveness of this method by comparison with a baseline of the actionrecognition datasets extended for this task. Moreover, we provide information-theoretical insight into the proposed method.",
"title": ""
},
{
"docid": "accbfd3c4caade25329a2a5743559320",
"text": "PURPOSE\nThe purpose of this investigation was to assess the frequency of complications of third molar surgery, both intraoperatively and postoperatively, specifically for patients 25 years of age or older.\n\n\nMATERIALS AND METHODS\nThis prospective study evaluated 3,760 patients, 25 years of age or older, who were to undergo third molar surgery by oral and maxillofacial surgeons practicing in the United States. The predictor variables were categorized as demographic (age, gender), American Society of Anesthesiologists classification, chronic conditions and medical risk factors, and preoperative description of third molars (present or absent, type of impaction, abnormalities or association with pathology). Outcome variables were intraoperative and postoperative complications, as well as quality of life issues (days of work missed or normal activity curtailed). Frequencies for data collected were tabulated.\n\n\nRESULTS\nThe sample was provided by 63 surgeons, and was composed of 3,760 patients with 9,845 third molars who were 25 years of age or older, of which 8,333 third molars were removed. Alveolar osteitis was the most frequently encountered postoperative problem (0.2% to 12.7%). Postoperative inferior alveolar nerve anesthesia/paresthesia occurred with a frequency of 1.1% to 1.7%, while lingual nerve anesthesia/paresthesia was calculated as 0.3%. All other complications also occurred with a frequency of less than 1%.\n\n\nCONCLUSION\nThe findings of this study indicate that third molar surgery in patients 25 years of age or older is associated with minimal morbidity, a low incidence of postoperative complications, and minimal impact on the patients quality of life.",
"title": ""
},
{
"docid": "047949b0dba35fb11f9f3b716893701d",
"text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.",
"title": ""
},
{
"docid": "1fe147f8da415604adab04f74dddf819",
"text": "Today, with the use of Internet, a huge volume of data been generated in the form of transactions, logs etc. As assessed, 90% of total volume of data generated since evaluation of Computers is from last 3 years only. It’s because of advancements in Data storage, global connectivity with Internet high speed, mobile applications usage and IoT. BigData Technologies aims at processing the BigData for deriving trend analysis and business usage from its BigData information. This paper highlights some of the security concerns that Hadoop implemented in its current version and need for some of the enhancements along with a new methodology such as Electronic Currency (BitCoin) and BlockChain functionality. And also emphasises on why and how BitCoin and BlockChain can fit in Hadoop Eco-Systems and their possible advantages and disadvantages. Especially, in validating and authorizing business transactions with some mathematical cryptographic techniques like hashcode with the help of BlockChain Miners.",
"title": ""
},
{
"docid": "1e17455be47fd697a085c8006f5947e9",
"text": "We present a simple, but surprisingly effective, method of self-training a twophase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f -score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon.",
"title": ""
},
{
"docid": "007791833b15bd3367c11bb17b7abf82",
"text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.",
"title": ""
},
{
"docid": "5ee544ed19ef78fa9212caea791ac4cf",
"text": "This paper describes the ecosystem of R add-on packages deve lop d around the infrastructure provided by the packagearules. The packages provide comprehensive functionality for ana lyzing interesting patterns including frequent itemsets, associ ati n rules, frequent sequences and for building applications like associative classification. After di scussing the ecosystem’s design we illustrate the ease of mining and visualizing rules with a short example .",
"title": ""
},
{
"docid": "9567d1018a7d0fcfb9616d522214f44c",
"text": "We discuss a physical implementation of the Crystalline robot system. Crystalline robots consist of modules that can aggregate together to form distributed robot systems. Crystalline modules are actuated by expanding and contracting each unit. This actuation mechanism permits automated shape metamorphosis. We describe the Crystalline module concept and a physical implementation of a robot system with ten units. We describe experiments with this robot.",
"title": ""
},
{
"docid": "cc5746a332cca808cc0e35328eecd993",
"text": "This paper investigates the relationship between corporate social responsibility (CSR) and the economic performance of corporations. It first examines the theories that suggest a relationship between the two. To test these theories, measures of CSR performance and disclosure developed by the New Consumer Group were analysed against the (past, concurrent and subsequent to CSR performance period) economic performance of 56 large UK companies. Economic performance included: financial (return on capital employed, return on equity and gross profit to sales ratios); and capital market performance (systematic risk and excess market valuation). The results supported the conclusion that (past, concurrent and subsequent) economic performance is related to both CSR performance and disclosure. However, the relationships were weak and lacked an overall consistency. For example, past economic performance was found to partly explain variations in firms’ involvement in philanthropic activities. CSR disclosure was affected (positively) by both a firm’s CSR performance and its concurrent financial performance. Involvement in environmental protection activities was found to be negatively correlated with subsequent financial performance. Whereas, a firm’s policies regarding women’s positions seem to be more rewarding in terms of positive capital market responses (performance) in the subsequent period. Donations to the Conservative Party were found not to be related to companies’ (past, concurrent or subsequent) financial and/or capital performance. operation must fall within the guidelines set by society; and • businesses act as moral agents within",
"title": ""
}
] |
scidocsrr
|
bedd771bc6d2a805c72aa585df3d7340
|
Reviewing CS1 exam question content
|
[
{
"docid": "05c82f9599b431baa584dd1e6d7dfc3e",
"text": "It is a common conception that CS1 is a very difficult course and that failure rates are high. However, until now there has only been anecdotal evidence for this claim. This article reports on a survey among institutions around the world regarding failure rates in introductory programming courses. The article describes the design of the survey and the results. The number of institutions answering the call for data was unfortunately rather low, so it is difficult to make firm conclusions. It is our hope that this article can be the starting point for a systematic collection of data in order to find solid proof of the actual failure and pass rates of CS1.",
"title": ""
}
] |
[
{
"docid": "9e1c3d4a8bbe211b85b19b38e39db28e",
"text": "This paper presents a novel context-based scene recognition method that enables mobile robots to recognize previously observed topological places in known environments or categorize previously unseen places in new environments. We achieve this by introducing the Histogram of Oriented Uniform Patterns (HOUP), which provides strong discriminative power for place recognition, while offering a significant level of generalization for place categorization. HOUP descriptors are used for image representation within a subdivision framework, where the size and location of sub-regions are determined using an informative feature selection method based on kernel alignment. Further improvement is achieved by developing a similarity measure that accounts for perceptual aliasing to eliminate the effect of indistinctive but visually similar regions that are frequently present in outdoor and indoor scenes. An extensive set of experiments reveals the excellent performance of our method on challenging categorization and recognition tasks. Specifically, our proposed method outperforms the current state of the art on two place categorization datasets with 15 and 5 place categories, and two topological place recognition datasets, with 5 and 27 places.",
"title": ""
},
{
"docid": "853edc6c6564920d0d2b69e0e2a63ad0",
"text": "This study evaluates the environmental performance and discounted costs of the incineration and landfilling of municipal solid waste that is ready for the final disposal while accounting for existing waste diversion initiatives, using the life cycle assessment (LCA) methodology. Parameters such as changing waste generation quantities, diversion rates and waste composition were also considered. Two scenarios were assessed in this study on how to treat the waste that remains after diversion. The first scenario is the status quo, where the entire residual waste was landfilled whereas in the second scenario approximately 50% of the residual waste was incinerated while the remainder is landfilled. Electricity was produced in each scenario. Data from the City of Toronto was used to undertake this study. Results showed that the waste diversion initiatives were more effective in reducing the organic portion of the waste, in turn, reducing the net electricity production of the landfill while increasing the net electricity production of the incinerator. Therefore, the scenario that incorporated incineration performed better environmentally and contributed overall to a significant reduction in greenhouse gas emissions because of the displacement of power plant emissions; however, at a noticeably higher cost. Although landfilling proves to be the better financial option, it is for the shorter term. The landfill option would require the need of a replacement landfill much sooner. The financial and environmental effects of this expenditure have yet to be considered.",
"title": ""
},
{
"docid": "fa855a3d92bf863c33b269383ddde081",
"text": "A network supporting deep unsupervised learning is present d. The network is an autoencoder with lateral shortcut connections from the enc oder to decoder at each level of the hierarchy. The lateral shortcut connections al low the higher levels of the hierarchy to focus on abstract invariant features. Wher eas autoencoders are analogous to latent variable models with a single layer of st ochastic variables, the proposed network is analogous to hierarchical latent varia bles models. Learning combines denoising autoencoder and denoising sou rces separation frameworks. Each layer of the network contributes to the cos t function a term which measures the distance of the representations produce d by the encoder and the decoder. Since training signals originate from all leve ls of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hi erarchy and the ability to learn invariant features are demonstrated in exp eriments.",
"title": ""
},
{
"docid": "d42aaf5c7c4f7982c1630e7b95b0377a",
"text": "In this paper we analyze our recent research on the use of document analysis techniques for metadata extraction from PDF papers. We describe a package that is designed to extract basic metadata from these documents. The package is used in combination with a digital library software suite to easily build personal digital libraries. The proposed software is based on a suitable combination of several techniques that include PDF parsing, low level document image processing, and layout analysis. In addition, we use the information gathered from a widely known citation database (DBLP) to assist the tool in the difficult task of author identification. The system is tested on some paper collections selected from recent conference proceedings.",
"title": ""
},
{
"docid": "6c81b1fe36a591b3b86a5e912a8792c1",
"text": "Mobile phones, sensors, patients, hospitals, researchers, providers and organizations are nowadays, generating huge amounts of healthcare data. The real challenge in healthcare systems is how to find, collect, analyze and manage information to make people's lives healthier and easier, by contributing not only to understand new diseases and therapies but also to predict outcomes at earlier stages and make real-time decisions. In this paper, we explain the potential benefits of big data to healthcare and explore how it improves treatment and empowers patients, providers and researchers. We also describe the ability of reality mining in collecting large amounts of data to understand people's habits, detect and predict outcomes, and illustrate the benefits of big data analytics through five effective new pathways that could be adopted to promote patients' health, enhance medicine, reduce cost and improve healthcare value and quality. We cover some big data solutions in healthcare and we shed light on implementations, such as Electronic Healthcare Record (HER) and Electronic Healthcare Predictive Analytics (e-HPA) in US hospitals. Furthermore, we complete the picture by highlighting some challenges that big data analytics faces in healthcare.",
"title": ""
},
{
"docid": "073f129a34957b19c6d9af96c869b9ab",
"text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, it is shown the consequences of droop implementation on the voltage stability of dc power systems, whose loads are active and nonlinear, e.g., constant power loads. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable (MGs) based on safe operating regions.",
"title": ""
},
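For readers unfamiliar with the control law discussed above, the conventional V-I droop relation for a dc source and the constant power load characteristic that threatens stability can be written as below; the symbols follow common usage and are assumptions of this note, not the paper's exact notation.

```latex
% Conventional V-I droop law for the k-th source (assumed notation):
v_k = v_{\mathrm{ref}} - R_{d,k} \, i_k
% Constant power load: its negative incremental resistance is the
% destabilizing effect the stability analysis must account for:
i_{\mathrm{CPL}} = \frac{P}{v}, \qquad
\frac{\partial i_{\mathrm{CPL}}}{\partial v} = -\frac{P}{v^2} < 0
```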
{
"docid": "f086fef6b9026a67e73cd6f892aa1c37",
"text": "Shoulder girdle movement is critical for stabilizing and orientating the arm during daily activities. During robotic arm rehabilitation with stroke patients, the robot must assist movements of the shoulder girdle. Shoulder girdle movement is characterized by a highly nonlinear function of the humeral orientation, which is different for each person. Hence it is improper to use pre-calculated shoulder girdle movement. If an exoskeleton robot cannot mimic the patient's shoulder girdle movement well, the robot axes will not coincide with the patient's, which brings reduced range of motion (ROM) and discomfort to the patients. A number of exoskeleton robots have been developed to assist shoulder girdle movement. The shoulder mechanism of these robots, along with the advantages and disadvantages, are introduced. In this paper, a novel shoulder mechanism design of exoskeleton robot is proposed, which can fully mimic the patient's shoulder girdle movement in real time.",
"title": ""
},
{
"docid": "fab33f2e32f4113c87e956e31674be58",
"text": "We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, uniqueand synergistic contributions. We focus on the relationship be tween “redundant information” and the more familiar information theoretic notions of “common information.” Our main contri bution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decompositi on of the total mutual information. Interestingly, this entai ls that any reasonable measure of redundant information cannot be deri ved by optimization over a single random variable. Keywords—common and private information, synergy, redundancy, information lattice, sufficient statistic, partial information decomposition",
"title": ""
},
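As a reference for the decomposition mentioned in the passage above, the bookkeeping identities of the standard partial information decomposition can be stated as follows; R, U_1, U_2 and S denote the redundant, unique and synergistic parts, in notation assumed here rather than quoted from the paper.

```latex
% Partial information decomposition of two predictors about a target Y:
I(X_1, X_2; Y) = R + U_1 + U_2 + S
% Consistency with the single-predictor mutual informations:
I(X_1; Y) = R + U_1, \qquad I(X_2; Y) = R + U_2
```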
{
"docid": "f128c1903831e9310d0ed179838d11d1",
"text": "A partially corporate feeding waveguide located below the radiating waveguide is introduced to a waveguide slot array to enhance the bandwidth of gain. A PMC termination associated with the symmetry of the feeding waveguide as well as uniform excitation is newly proposed for realizing dense and uniform slot arrangement free of high sidelobes. To exploit the bandwidth of the feeding circuit, the 4 × 4-element subarray is also developed for wider bandwidth by using standing-wave excitation. A 16 × 16-element array with uniform excitation is fabricated in the E-band by diffusion bonding of laminated thin copper plates which has the advantages of high precision and high mass-productivity. The antenna gain of 32.4 dBi and the antenna efficiency of 83.0% are measured at the center frequency. The 1 dB-down gain bandwidth is no less than 9.0% and a wideband characteristic is achieved.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "6afe0360f074304e9da9c100e28e9528",
"text": "Unikernels are a promising alternative for application deployment in cloud platforms. They comprise a very small footprint, providing better deployment agility and portability among virtualization platforms. Similar to Linux containers, they are a lightweight alternative for deploying distributed applications based on microservices. However, the comparison of unikernels with other virtualization options regarding the concurrent provisioning of instances, as in the case of microservices-based applications, is still lacking. This paper provides an evaluation of KVM (Virtual Machines), Docker (Containers), and OSv (Unikernel), when provisioning multiple instances concurrently in an OpenStack cloud platform. We confirmed that OSv outperforms the other options and also identified opportunities for optimization.",
"title": ""
},
{
"docid": "6ed5198b9b0364f41675b938ec86456f",
"text": "Artificial intelligence (AI) will have many profound societal effects It promises potential benefits (and may also pose risks) in education, defense, business, law, and science In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need fol human toil We also note that some people fear the automation of work hy machines and the resulting unemployment Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension We conclude with a discussion of problems of moving toward the kind of economy that will he enahled by developments in AI ARTIFICIAL INTELLIGENCE [Al] and other developments in computer science are giving birth to a dramatically different class of machinesPmachines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these I am grateful for the helpful comments provided by many people Specifically I would like to acknowledge the advice teceived from Sandra Cook and Victor Walling of SRI, Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis, Margaret Boden of The University of Sussex, Henry Levin and Charles Holloway of Stanford University, James Albus of the National Bureau of Standards, and Peter Hart of Syntelligence Herbert Simon, of CarnegieMellon Univetsity, wrote me extensive criticisms and rebuttals of my arguments Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. Save1 Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933) who, a half-century ago, predicted an end to toil within one hundred years machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question Some claim that AI is not really very different from other technologies that have supported automation and increased productivitytechnologies such as mechanical engineering, ele&onics, control engineering, and operations rcsearch. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, thcrc will be some, perhaps even substantial shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an out,come is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI-one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Cert,ainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities-if not in twenty years, then surely in fifty. 
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans Of course, the mcrc fact that some work can be performed automatically does not make it inevitable that it, will be. Automation depends on many factorsPeconomic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality) In THE AI MAGAZINE Summer 1984 5 AI Magazine Volume 5 Number 2 (1984) (© AAAI)",
"title": ""
},
{
"docid": "9b628f47102a0eee67e469e223ece837",
"text": "We present a method for automatically extracting from a running system an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval to identify when an observed system state is similar to a previously-observed state. This allows operators to identify and quantify the frequency of recurrent problems, to leverage previous diagnostic efforts, and to establish whether problems seen at different installations of the same site are similar or distinct. We show that the naive approach to constructing these signatures based on simply recording the actual ``raw'' values of collected measurements is ineffective, leading us to a more sophisticated approach based on statistical modeling and inference. Our method requires only that the system's metric of merit (such as average transaction response time) as well as a collection of lower-level operational metrics be collected, as is done by existing commercial monitoring tools. Even if the traces have no annotations of prior diagnoses of observed incidents (as is typical), our technique successfully clusters system states corresponding to similar problems, allowing diagnosticians to identify recurring problems and to characterize the ``syndrome'' of a group of problems. We validate our approach on both synthetic traces and several weeks of production traces from a customer-facing geoplexed 24 x 7 system; in the latter case, our approach identified a recurring problem that had required extensive manual diagnosis, and also aided the operators in correcting a previous misdiagnosis of a different problem.",
"title": ""
},
{
"docid": "7121d534b758bab829e1db31d0ce2e43",
"text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias xspace, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias xspace on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias xspace are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.",
"title": ""
},
{
"docid": "aef76a8375b12f4c38391093640a704a",
"text": "Storytelling plays an important role in human life, from everyday communication to entertainment. Interactive storytelling (IS) offers its audience an opportunity to actively participate in the story being told, particularly in video games. Managing the narrative experience of the player is a complex process that involves choices, authorial goals and constraints of a given story setting (e.g., a fairy tale). Over the last several decades, a number of experience managers using artificial intelligence (AI) methods such as planning and constraint satisfaction have been developed. In this paper, we extend existing work and propose a new AI experience manager called player-specific automated storytelling (PAST), which uses automated planning to satisfy the story setting and authorial constraints in response to the player's actions. Out of the possible stories algorithmically generated by the planner in response, the one that is expected to suit the player's style best is selected. To do so, we employ automated player modeling. We evaluate PAST within a video-game domain with user studies and discuss the effects of combining planning and player modeling on the player's perception of agency.",
"title": ""
},
{
"docid": "9086d8f1d9a0978df0bd93cff4bce20a",
"text": "Australian government enterprises have shown a significant interest in the cloud technology-enabled enterprise transformation. Australian government suggests the whole-of-a-government strategy to cloud adoption. The challenge is how best to realise this cloud adoption strategy for the cloud technology-enabled enterprise transformation? The cloud adoption strategy realisation requires concrete guidelines and a comprehensive practical framework. This paper proposes the use of an agile enterprise architecture framework to developing and implementing the adaptive cloud technology-enabled enterprise architecture in the Australian government context. The results of this paper indicate that a holistic strategic agile enterprise architecture approach seems appropriate to support the strategic whole-of-a-government approach to cloud technology-enabled government enterprise transformation.",
"title": ""
},
{
"docid": "e0a314eb1fe221791bc08094d0c04862",
"text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.",
"title": ""
},
{
"docid": "a4ed5c4f87e4faa357f0dec0f5c0e354",
"text": "In today's information age, information sharing and transfer has increased exponentially. The threat of an intruder accessing secret information has been an ever existing concern for the data communication experts. Cryptography and steganography are the most widely used techniques to overcome this threat. Cryptography involves converting a message text into an unreadable cipher. On the other hand, steganography embeds message into a cover media and hides its existence. Both these techniques provide some security of data neither of them alone is secure enough for sharing information over an unsecure communication channel and are vulnerable to intruder attacks. Although these techniques are often combined together to achieve higher levels of security but still there is a need of a highly secure system to transfer information over any communication media minimizing the threat of intrusion. In this paper we propose an advanced system of encrypting data that combines the features of cryptography, steganography along with multimedia data hiding. This system will be more secure than any other these techniques alone and also as compared to steganography and cryptography combined systems Visual steganography is one of the most secure forms of steganography available today. It is most commonly implemented in image files. However embedding data into image changes its color frequencies in a predictable way. To overcome this predictability, we propose the concept of multiple cryptography where the data will be encrypted into a cipher and the cipher will be hidden into a multimedia image file in encrypted format. We shall use traditional cryptographic techniques to achieve data encryption and visual steganography algorithms will be used to hide the encrypted data.",
"title": ""
},
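To make the combination of a cipher with image steganography concrete, here is a minimal illustrative sketch, not the system proposed above: the XOR "cipher" merely stands in for a real encryption algorithm (e.g. AES) to keep the example self-contained, and the cover image is random data.

```python
# Encrypt a message, then hide the ciphertext in the least significant bits of an image.
import numpy as np

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # placeholder cipher: XOR with a repeating key (not secure on its own)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite each pixel's LSB
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, num_bytes: int) -> bytes:
    bits = image.flatten()[:num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in cover image
cipher = xor_cipher(b"meet at noon", b"secret-key")
stego = embed_lsb(cover, cipher)
recovered = xor_cipher(extract_lsb(stego, len(cipher)), b"secret-key")
assert recovered == b"meet at noon"
```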
{
"docid": "7ba3f13f58c4b25cc425b706022c1f2b",
"text": "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1,2] have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN [2] for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"title": ""
},
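The two-stage pipeline described above (RPN proposals re-scored by boosted forests on shared feature maps) can be sketched at a toy level as follows; the feature arrays are random placeholders standing in for RoI-pooled convolutional features, and sklearn's gradient-boosted trees stand in for the boosted forest used in the paper.

```python
# Re-score region proposals with a boosted-tree classifier on pooled features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(2000, 256))       # pooled features for 2000 RPN proposals
train_labels = rng.integers(0, 2, size=2000)     # 1 = pedestrian, 0 = (hard negative) background

forest = GradientBoostingClassifier(n_estimators=100, max_depth=3)
forest.fit(train_feats, train_labels)

test_feats = rng.normal(size=(10, 256))          # features for 10 new proposals
pedestrian_scores = forest.predict_proba(test_feats)[:, 1]
```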
{
"docid": "953d1b368a4a6fb09e6b34e3131d7804",
"text": "The activation of the Deep Convolutional Neural Networks hidden layers can be successfully used as features, often referred as Deep Features, in generic visual similarity search tasks. Recently scientists have shown that permutation-based methods offer very good performance in indexing and supporting approximate similarity search on large database of objects. Permutation-based approaches represent metric objects as sequences (permutations) of reference objects, chosen from a predefined set of data. However, associating objects with permutations might have a high cost due to the distance calculation between the data objects and the reference objects. In this work, we propose a new approach to generate permutations at a very low computational cost, when objects to be indexed are Deep Features. We show that the permutations generated using the proposed method are more effective than those obtained using pivot selection criteria specifically developed for permutation-based methods.",
"title": ""
}
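For reference, the standard pivot-based permutation representation that the passage builds on (not the paper's low-cost variant for deep features) can be sketched like this; the reference objects, query and database vectors are random stand-ins, and Spearman's footrule serves as the permutation distance.

```python
# Represent each feature vector by the ranking of reference objects by distance,
# then compare two objects via Spearman's footrule on their permutations.
import numpy as np

def permutation(feature, references):
    # indices of the reference objects sorted from closest to farthest
    dists = np.linalg.norm(references - feature, axis=1)
    return np.argsort(dists)

def footrule(perm_a, perm_b):
    # total displacement of each reference between the two permutations
    pos_a = np.argsort(perm_a)
    pos_b = np.argsort(perm_b)
    return int(np.abs(pos_a - pos_b).sum())

rng = np.random.default_rng(0)
references = rng.normal(size=(50, 2048))     # 50 reference objects in deep-feature space
query = rng.normal(size=2048)
database = rng.normal(size=(1000, 2048))

q_perm = permutation(query, references)
ranking = sorted(range(len(database)),
                 key=lambda i: footrule(q_perm, permutation(database[i], references)))
```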
] |
scidocsrr
|
968c51f919d208fb9969e8c42918ad0b
|
Sentence Alignment using Unfolding Recursive Autoencoders
|
[
{
"docid": "3b0b6075cf6cdb13d592b54b85cdf4af",
"text": "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-totext rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "ce08ae4dd55bb290900f49010e219513",
"text": "BACKGROUND\nCurrent antipsychotics have only a limited effect on 2 core aspects of schizophrenia: negative symptoms and cognitive deficits. Minocycline is a second-generation tetracycline that has a beneficial effect in various neurologic disorders. Recent findings in animal models and human case reports suggest its potential for the treatment of schizophrenia. These findings may be linked to the effect of minocycline on the glutamatergic system, through inhibition of nitric oxide synthase and blocking of nitric oxide-induced neurotoxicity. Other proposed mechanisms of action include effects of minocycline on the dopaminergic system and its inhibition of microglial activation.\n\n\nOBJECTIVE\nTo examine the efficacy of minocycline as an add-on treatment for alleviating negative and cognitive symptoms in early-phase schizophrenia.\n\n\nMETHOD\nA longitudinal double-blind, randomized, placebo-controlled design was used, and patients were followed for 6 months from August 2003 to March 2007. Seventy early-phase schizophrenia patients (according to DSM-IV) were recruited and 54 were randomly allocated in a 2:1 ratio to minocycline 200 mg/d. All patients had been initiated on treatment with an atypical antipsychotic < or = 14 days prior to study entry (risperidone, olanzapine, quetiapine, or clozapine; 200-600 mg/d chlorpromazine-equivalent doses). Clinical, cognitive, and functional assessments were conducted, with the Scale for the Assessment of Negative Symptoms (SANS) as the primary outcome measure.\n\n\nRESULTS\nMinocycline was well tolerated, with few adverse events. It showed a beneficial effect on negative symptoms and general outcome (evident in SANS, Clinical Global Impressions scale). A similar pattern was found for cognitive functioning, mainly in executive functions (working memory, cognitive shifting, and cognitive planning).\n\n\nCONCLUSIONS\nMinocycline treatment was associated with improvement in negative symptoms and executive functioning, both related to frontal-lobe activity. Overall, the findings support the beneficial effect of minocycline add-on therapy in early-phase schizophrenia.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00733057.",
"title": ""
},
{
"docid": "2ed433611f5e953760d7d501c977470b",
"text": "Creating multiple layout alternatives for graphical user interfaces to accommodate different screen orientations for mobile devices is labor intensive. Here, we investigate how such layout alternatives can be generated automatically from an initial layout. Providing good layout alternatives can inspire developers in their design work and support them to create adaptive layouts. We performed an analysis of layout alternatives in existing apps and identified common realworld layout transformation patterns. Based on these patterns we developed a prototype that generates landscape and portrait layout alternatives for an initial layout. In general, there is a very large number of possibilities of how widgets can be rearranged. For this reason we developed a classification method to identify and evaluate “good” layout alternatives automatically. From this set of “good” layout alternatives, designers can choose suitable layouts for their applications. In a questionnaire study we verified that our method generates layout alternatives that appear well structured and are easy to use.",
"title": ""
},
{
"docid": "a88dc240c7cbb2570c1fc7c22a813ef3",
"text": "The Acropolis of Athens is one of the most prestigious ancient monuments in the world, attracting daily many visitors, and therefore its structural integrity is of paramount importance. During the last decade an accelerographic array has been installed at the Archaeological Site, in order to monitor the seismic response of the Acropolis Hill and the dynamic behaviour of the monuments (including the Circuit Wall), while several optical fibre sensors have been attached at a middle-vertical section of the Wall. In this study, indicative real time recordings of strain and acceleration on the Wall and the Hill with the use of optical fibre sensors and accelerographs, respectively, are presented and discussed. The records aim to investigate the static and dynamic behaviour – distress of the Wall and the Acropolis Hill, taking also into account the prevailing geological conditions. The optical fibre technology, the location of the sensors, as well as the installation methodology applied is also presented. Emphasis is given to the application of real time instrumental monitoring which can be used as a valuable tool to predict potential structural",
"title": ""
},
{
"docid": "df2f425b5f4c4ad9db16a8d9dc8286ee",
"text": "Radiofrequency Identification (RFID) tags are ever increasing in use, from the monitoring of components to the tracking of produce or livestock during processing & production. They are also widely used in the touch-less technologies seen today in store and payment cards and banking services. With this there has been the ever increasing need to reduce the power required to activate the RFID tag, while maximizing the read range. In addition there is a need to reduce the size of the RFID tags, which are typically embedded in labels and/or cards, in order to make them discreet. In order to maximize read ranges, one needs to ideally match the impedance of the RFID tag antenna to the chip utilized in the tag & ensure that for a particular reader that a minimum threshold power is achieved to activate the tag chip at the required operating frequencies. In this work, we look at the modelling of a physical RFID tag used in a store card and its read ranges obtained from literature, and make comparisons of the model simulations to physical test data. In addition, we take the validated model and find an optimal tag antenna design for a particular application with both size and manufacturing constraints.",
"title": ""
},
{
"docid": "703696ca3af2a485ac34f88494210007",
"text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.",
"title": ""
},
{
"docid": "b79575908a84a015c8a83d35c63e4f06",
"text": "This study examines the relation between stress and illness among bus drivers in a large American city. Several factors are identified that predict stress-related ill health for this occupational group. Canonical correlation techniques are used to combine daily work stress and recent stressful life events into a single life/work stress variate. Likewise, somatic symptoms and serious illness reports are combined into a single canonical illness variate. This procedure simplifies the analysis of multiple stress and illness indicators and also permits the statistical control of potential contaminating influences on stress and illness measures (eg, neuroticism). Discriminant function analysis identified four variables that differentiate bus drivers who get ill under high stress (N = 137) from those who remain healthy under stress (N = 137). Highly stressed and ill bus drivers use more avoidance coping behaviors, report more illness in their family medical histories, are low in the disposition of \"personality hardiness,\" and are also low in social assets. The derived stepwise discriminant function correctly classified 71% of cases in an independent \"hold-out\" sample. These results suggest fruitful areas of attention for health promotion and stress management programs in the public transit industry.",
"title": ""
},
{
"docid": "aee2fa15c3ed5beb9b9161db8fec2a47",
"text": "With the proliferation of MOOCs (Massive Open Online Courses) providers, like Coursera, edX, FutureLearn, UniCampus.ro, NOVAMOOC.uvt.ro or MOOC.ro, it’s a real challenge to find the best learning resource. MOOCBuddy – a MOOC recommender system as a chatbot for Facebook Messenger, based on user’s social media profile and interests, could be a solution. MOOCBuddy is looking like the big trend of 2016, based on the Messenger Platform launched by Facebook in the mid of April 2016. Author",
"title": ""
},
{
"docid": "2ecbaf6755e049d3afe4634ed7ed1c4d",
"text": "We present a modular and extensible approach to integrate noisy measurements from multiple heterogeneous sensors that yield either absolute or relative observations at different and varying time intervals, and to provide smooth and globally consistent estimates of position in real time for autonomous flight. We describe the development of algorithms and software architecture for a new 1.9kg MAV platform equipped with an IMU, laser scanner, stereo cameras, pressure altimeter, magnetometer, and a GPS receiver, in which the state estimation and control are performed onboard on an Intel NUC 3rd generation i3 processor. We illustrate the robustness of our framework in large-scale, indoor-outdoor autonomous aerial navigation experiments involving traversals of over 440 meters at average speeds of 1.5 m/s with winds around 10 mph while entering and exiting buildings.",
"title": ""
},
{
"docid": "1f3e2c432a5f2f1a6ffcf892c6a06eab",
"text": "In this letter, we study the Ramanujan Sums (RS) transform by means of matrix multiplication. The RS are orthogonal in nature and therefore offer excellent energy conservation capability. The 1-D and 2-D forward RS transforms are easy to calculate, but their inverse transforms are not defined in the literature for non-even function <formula formulatype=\"inline\"><tex Notation=\"TeX\">$ ({\\rm mod}~ {\\rm M}) $</tex></formula>. We solved this problem by using matrix multiplication in this letter.",
"title": ""
},
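A rough sketch of computing an RS transform as a matrix product follows. The indexing and normalisation are assumptions for illustration rather than the letter's exact construction, and the inverse below simply relies on the small example matrix being non-singular.

```python
# Build a Ramanujan Sums matrix and apply the forward transform as a matrix product.
import numpy as np
from math import gcd

def ramanujan_sum(q: int, n: int) -> float:
    # c_q(n) = sum of cos(2*pi*k*n/q) over k in 1..q with gcd(k, q) == 1
    return sum(np.cos(2 * np.pi * k * n / q) for k in range(1, q + 1) if gcd(k, q) == 1)

def rs_matrix(size: int) -> np.ndarray:
    # rows indexed by q = 1..size, columns by n = 1..size
    return np.array([[ramanujan_sum(q, n) for n in range(1, size + 1)]
                     for q in range(1, size + 1)])

x = np.array([4.0, 1.0, 3.0, 2.0])
R = rs_matrix(len(x))
forward = R @ x                          # forward 1-D RS transform as a matrix product
recovered = np.linalg.solve(R, forward)  # inverse via the (here non-singular) matrix
assert np.allclose(recovered, x)
```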
{
"docid": "08c26880862b09e81acc1cd99904fded",
"text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.",
"title": ""
},
{
"docid": "3b1bee4155fb7fa948f342e246a6c1c0",
"text": "This article presents a number of complementary algorithms for detecting faults on-board operating robots, where a fault is defined as a deviation from expected behavior. The algorithms focus on faults that cannot directly be detected from current sensor values but require inference from a sequence of time-varying sensor values. Each algorithm provides an independent improvement over the basic approach. These improvements are not mutually exclusive, and the algorithms may be combined to suit the application domain. All the approaches presented require dynamic models representing the behavior of each of the fault and operational states. These models can be built from analytical models of the robot dynamics, data from simulation, or from the real robot. All the approaches presented detect faults from a finite number of known fault conditions, although there may potentially be a very large number of these faults.",
"title": ""
},
{
"docid": "513dd1327dffdb43998e7f4e85fdc817",
"text": "This paper proposes an elastic spatial verification method for Instance Search, particularly for dealing with non-planar and non-rigid queries exhibiting complex spatial transformations. Different from existing models that map keypoints between images based on a linear transformation (e.g., affine, homography), our model exploits the topological arrangement of keypoints to address the non-linear spatial transformations that are extremely common in real life situations. In particular, we propose a novel technique to elastically verify the topological spatial consistency with the triangulated graph through a “sketch-and-match” scheme. The spatial topology configuration, emphasizing relative positioning rather than absolute coordinates, is first sketched by a triangulated graph, whose edges essentially capture the topological layout of the corresponding keypoints. Next, the spatial consistency is efficiently estimated as the number of common edges between the triangulated graphs. Compared to the existing methods, our technique is much more effective in modeling the complex spatial transformations of non-planar and non-rigid instances, while being compatible to instances with simple linear transformations. Moreover, our method is by nature more robust in spatial verification by considering the locations, rather than the local geometry of keypoints, which are sensitive to motions and viewpoint changes. We evaluate our method extensively on three years of TRECVID datasets, as well as our own dataset MQA, showing large improvement over other methods for the task of Instance Search.",
"title": ""
},
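A toy version of the "sketch-and-match" idea, not the paper's system, is shown below: the matched keypoints in each image are Delaunay-triangulated and spatial consistency is scored as the number of edges the two triangulated graphs share; the keypoint coordinates are made up.

```python
# Score topological spatial consistency as the number of shared triangulation edges.
import numpy as np
from scipy.spatial import Delaunay

def triangulation_edges(points: np.ndarray) -> set:
    edges = set()
    for tri in Delaunay(points).simplices:          # each simplex is a triangle (i, j, k)
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((tri[a], tri[b]))))
    return edges

# matched keypoints: row i in query_pts corresponds to row i in reference_pts
rng = np.random.default_rng(0)
query_pts = rng.uniform(0, 100, size=(30, 2))
reference_pts = query_pts * 1.2 + rng.normal(scale=2.0, size=(30, 2))  # jittered match

common_edges = triangulation_edges(query_pts) & triangulation_edges(reference_pts)
consistency_score = len(common_edges)               # higher = more consistent topology
```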
{
"docid": "41fe7d2febb05a48daf69b4a41c77251",
"text": "Multi-objective evolutionary algorithms for the construction of neural ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm). It was shown that DIVACE tries to find an optimal trade-off between diversity and accuracy as it searches for an ensemble for some particular pattern recognition task by treating these two objectives explicitly separately. A detailed discussion of DIVACE together with further experimental studies form the essence of this paper. A new diversity measure which we call Pairwise Failure Crediting (PFC) is proposed. This measure forms one of the two evolutionary pressures being exerted explicitly in DIVACE. Experiments with this diversity measure as well as comparisons with previously studied approaches are hence considered. Detailed analysis of the results show that DIVACE, as a concept, has promise. Mathematical Subject Classification (2000): 68T05, 68Q32, 68Q10.",
"title": ""
},
{
"docid": "2d0121e8509d09571d8973da784440a5",
"text": "In this paper we examine the suitability of BPMN for business process modelling, using the Workflow Patterns as an evaluation framework. The Workflow Patterns are a collection of patterns developed for assessing control-flow, data and resource capabilities in the area of Process Aware Information Systems (PAIS). In doing so, we provide a comprehensive evaluation of the capabilities of BPMN, and its strengths and weaknesses when utilised for business process modelling. The analysis provided for BPMN is part of a larger effort aiming at an unbiased and vendor-independent survey of the suitability and the expressive power of some mainstream process modelling languages. It is a sequel to an analysis series where languages like BPEL and UML 2.0 A.D are evaluated.",
"title": ""
},
{
"docid": "c117a5fc0118f3ea6c576bb334759d59",
"text": "While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most = 0.1 can cause more than 35% test error.",
"title": ""
},
{
"docid": "8e70aea51194dba675d4c3e88ee6b9ad",
"text": "Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.",
"title": ""
},
{
"docid": "45849edd3c46d92300707c38d0f39a7a",
"text": "Skin cancer is one of the most common malignancies in fair skin population. It can be divided in two main classes: melanoma and non-melanoma skin cancer. This means that pigmented and, also, non-pigmented skin lesions might raise an important risk. Due to the fact that melanoma is more aggressive, pigmented skin lesions gained more attention in terms of automatic diagnosis. One of the most important steps in this procedure is to correctly identify the skin lesion in an image (acquired with a dermoscope or a standard camera). Based on the accurate identification of the lesion, specific automatic algorithms for cancer diagnosis can be developed. In this paper we propose and evaluate an artificial intelligence method for pigmented and non-pigmented lesion segmentation. The method uses generative adversarial neural networks. The network was trained and tested on a large set of images acquired with smartphone cameras. The results show that approximately 92% of the lesions are correctly identified on the test set.",
"title": ""
},
{
"docid": "89189f434e7ffd2110048d43955566de",
"text": "This paper describes two techniques for designing phase-frequency detectors (PFDs) with higher operating frequencies (periods of less than 8x the delay of a fan-out-4 inverter (FO-4)) and faster frequency acquisition. Prototypes designed in 0.25-µm CMOS process exhibit operating frequencies of 1.25 GHz ( = 1/(8 ċ FO-4) ) and 1.5 GHz ( = 1/(6.7 ċ FO-4) ) for two techniques respectively whereas a conventional PFD operates < 1 GHz ( = 1/(10 ċ FO-4) ). The two proposed PFDs achieve a capture range of 1.7x and 1.2x the conventional design.",
"title": ""
},
{
"docid": "a0605a35164bba33c4e74c5a1bf997fa",
"text": "Most of the research on text categorization has focused on classifying text documents into a set of categories with no structural relationships among them (flat classification). However, in many information repositories documents are organized in a hierarchy of categories to support a thematic search by browsing topics of interests. The consideration of the hierarchical relationship among categories opens several additional issues in the development of methods for automated document classification. Questions concern the representation of documents, the learning process, the classification process and the evaluation criteria of experimental results. They are systematically investigated in this paper, whose main contribution is a general hierarchical text categorization framework where the hierarchy of categories is involved in all phases of automated document classification, namely feature selection, learning and classification of a new document. An automated threshold determination method for classification scores is embedded in the proposed framework. It can be applied to any classifier that returns a degree of membership of a document to a category. In this work three learning methods are considered for the construction of document classifiers, namely centroid-based, naïve Bayes and SVM. The proposed framework has been implemented in the system WebClassIII and has been tested on three datasets (Yahoo, DMOZ, RCV1) which present a variety of situations in terms of hierarchical structure. Experimental results are reported and several conclusions are drawn on the comparison of the flat vs. the hierarchical approach as well as on the comparison of different hierarchical classifiers. The paper concludes with a review of related work and a discussion of previous findings vs. our findings.",
"title": ""
},
{
"docid": "29d98961d0ecde875bedcd4cfcb72026",
"text": "The claim that we have a moral obligation, where a choice can be made, to bring to birth the 'best' child possible, has been highly controversial for a number of decades. More recently Savulescu has labelled this claim the Principle of Procreative Beneficence. It has been argued that this Principle is problematic in both its reasoning and its implications, most notably in that it places lower moral value on the disabled. Relentless criticism of this proposed moral obligation, however, has been unable, thus far, to discredit this Principle convincingly and as a result its influence shows no sign of abating. I will argue that while criticisms of the implications and detail of the reasoning behind it are well founded, they are unlikely to produce an argument that will ultimately discredit the obligation that the Principle of Procreative Beneficence represents. I believe that what is needed finally and convincingly to reveal the fallacy of this Principle is a critique of its ultimate theoretical foundation, the notion of impersonal harm. In this paper I argue that while the notion of impersonal harm is intuitively very appealing, its plausibility is based entirely on this intuitive appeal and not on sound moral reasoning. I show that there is another plausible explanation for our intuitive response and I believe that this, in conjunction with the other theoretical criticisms that I and others have levelled at this Principle, shows that the Principle of Procreative Beneficence should be rejected.",
"title": ""
}
] |
scidocsrr
|
39bab4f77ae27b7d60f132efac4d0499
|
How to use attribute-based encryption to implement role-based access control in the cloud
|
[
{
"docid": "d4ee96388ca88c0a5d2a364f826dea91",
"text": "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme.",
"title": ""
}
] |
[
{
"docid": "f9f0241c02486f6760951d3ac33cc861",
"text": "Clinical evidence indicates that swallowing, a vital function, may be impaired in sleep. To address this issue, we elicited swallows in awake and sleeping adult cats by injecting water through a nasopharyngeal tube. Our results indicate that swallowing occurs not only in non-rapid eye movement (NREM) sleep, but also in rapid eye movement (REM) sleep. In NREM sleep, the injections often caused arousal followed by swallowing, but, in the majority of cases, swallowing occurred in NREM sleep before arousal. These swallows in NREM sleep were entirely comparable to swallows in wakefulness. In contrast, the injections in REM sleep were less likely to cause arousal, and the swallows occurred as hypotonic events. Furthermore, apneas were sometimes elicited by the injections in REM sleep, and there was repetitive swallowing upon arousal. These results suggest that the hypotonic swallows of REM sleep were ineffective.",
"title": ""
},
{
"docid": "747df95d08e6e5b1802dacf4e84b6642",
"text": "One of the key requirement of many schemes is that of random numbers. Sequence of random numbers are used at several stages of a standard cryptographic protocol. A simple example is of a Vernam cipher, where a string of random numbers is added to massage string to generate the encrypted code. It is represented as C = M ⊕ K where M is the message, K is the key and C is the ciphertext. It has been mathematically shown that this simple scheme is unbreakable is key K as long as M and is used only once. For a good cryptosystem, the security of the cryptosystem is not be based on keeping the algorithm secret but solely on keeping the key secret. The quality and unpredictability of secret data is critical to securing communication by modern cryptographic techniques. Generation of such data for cryptographic purposes typically requires an unpredictable physical source of random data. In this manuscript, we present studies of three different methods for producing random number. We have tested them by studying its frequency, correlation as well as using the test suit from NIST.",
"title": ""
},
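The Vernam cipher mentioned above, C = M ⊕ K, takes only a few lines; the key here is drawn from the OS random source purely for illustration, and must be as long as the message and never reused.

```python
# One-time pad: XOR the message with a random key of equal length.
import os

message = b"attack at dawn"
key = os.urandom(len(message))                     # one random key byte per message byte
ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
assert decrypted == message                        # the same key recovers the message
```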
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "8d56aa104cb727bd6496cac89f1f7d9c",
"text": "In this paper, we develop a semantic annotation technique for location-based social networks to automatically annotate all places with category tags which are a crucial prerequisite for location search, recommendation services, or data cleaning. Our annotation algorithm learns a binary support vector machine (SVM) classifier for each tag in the tag space to support multi-label classification. Based on the check-in behavior of users, we extract features of places from i) explicit patterns (EP) of individual places and ii) implicit relatedness (IR) among similar places. The features extracted from EP are summarized from all check-ins at a specific place. The features from IR are derived by building a novel network of related places (NRP) where similar places are linked by virtual edges. Upon NRP, we determine the probability of a category tag for each place by exploring the relatedness of places. Finally, we conduct a comprehensive experimental study based on a real dataset collected from a location-based social network, Whrrl. The results demonstrate the suitability of our approach and show the strength of taking both EP and IR into account in feature extraction.",
"title": ""
},
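The per-tag binary SVM setup described above can be sketched with a one-vs-rest wrapper; the check-in feature vectors and the tag matrix below are random stand-ins for the explicit-pattern and implicit-relatedness features the paper extracts.

```python
# Multi-label place annotation: one binary SVM per category tag (one-vs-rest).
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 40))            # one row of check-in features per place
tags = rng.integers(0, 2, size=(500, 6))         # binary indicators for 6 candidate tags

model = OneVsRestClassifier(LinearSVC())         # trains one binary SVM per tag
model.fit(features, tags)
predicted_tags = model.predict(rng.normal(size=(3, 40)))   # tag vector for 3 new places
```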
{
"docid": "b61c9f69a2fffcf2c3753e51a3bbfa14",
"text": "..............................................................................................................ix 1 Interoperability .............................................................................................1 1.",
"title": ""
},
{
"docid": "db215a998da127466bcb5e80b750cbbb",
"text": "to design and build computing systems capable of running themselves, adjusting to varying circumstances, and preparing their resources to handle most efficiently the workloads we put upon them. These autonomic systems must anticipate needs and allow users to concentrate on what they want to accomplish rather than figuring how to rig the computing systems to get them there. Abtract The performance of current shared-memory multiprocessor systems depends on both the efficient utilization of all the architectural elements in the system (processors, memory, etc), and the workload characteristics. This Thesis has the main goal of improving the execution of workloads of parallel applications in shared-memory multiprocessor systems by using real performance information in the processor scheduling. In multiprocessor systems, users request for resources (processors) to execute their parallel applications. The Operating System is responsible to distribute the available physical resources among parallel applications in the more convenient way for both the system and the application performance. It is a typical practice of users in multiprocessor systems to request for a high number of processors assuming that the higher the processor request, the higher the number of processors allocated, and the higher the speedup achieved by their applications. However, this is not true. Parallel applications have different characteristics with respect to their scalability. Their speedup also depends on run-time parameters such as the influence of the rest of running applications. This Thesis proposes that the system should not base its decisions on the users requests only, but the system must decide, or adjust, its decisions based on real performance information calculated at run-time. The performance of parallel applications is an information that the system can dynamically measure without introducing a significant penalty in the application execution time. Using this information, the processor allocation can be decided, or modified, being robust to incorrect processor requests given by users. We also propose that the system use a target efficiency to ensure the efficient use of processors. This target efficiency is a system parameter and can be dynamically decided as a function of the characteristics of running applications or the number of queued applications. We also propose to coordinate the different scheduling levels that operate in the processor scheduling: the run-time scheduler, the processor scheduler, and the queueing system. We propose to establish an interface between levels to send and receive information, and to take scheduling decisions considering the information provided by the rest of …",
"title": ""
},
{
"docid": "6870efe6d9607c82992b5015a5336969",
"text": "We present an approach to automatically classify clinical text at a sentence level. We are using deep convolutional neural networks to represent complex features. We train the network on a dataset providing a broad categorization of health information. Through a detailed evaluation, we demonstrate that our method outperforms several approaches widely used in natural language processing tasks by about 15%.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
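A toy sketch of the Mitigation phase described above: fit polynomial regressions of the post-exposure rating on the initial rating and the displayed average, and keep the degree with the lowest BIC. The rating data is synthetic and the exact feature choice is an assumption.

```python
# Polynomial regression with BIC-based degree selection for modelling changed ratings.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
initial = rng.uniform(0, 10, 400)                    # rating before seeing the average
shown_avg = rng.uniform(0, 10, 400)                  # average displayed to the user
final = 0.7 * initial + 0.3 * shown_avg + rng.normal(scale=0.5, size=400)  # biased rating

X = np.column_stack([initial, shown_avg])

def bic(y, y_hat, num_params):
    # crude Gaussian BIC: n*ln(RSS/n) + k*ln(n)
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + num_params * np.log(n)

best = None
for degree in (1, 2, 3):
    feats = PolynomialFeatures(degree).fit_transform(X)
    model = LinearRegression().fit(feats, final)
    score = bic(final, model.predict(feats), feats.shape[1])
    if best is None or score < best[0]:
        best = (score, degree, model)

print("selected polynomial degree:", best[1])
```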
{
"docid": "447b689d9c7c2a6b71baf2fac2fa2a4f",
"text": "Status of this Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract Various routing protocols, including Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (ISIS), explicitly allow \"Equal-Cost Multipath\" (ECMP) routing. Some router implementations also allow equal-cost multipath usage with RIP and other routing protocols. The effect of multipath routing on a forwarder is that the forwarder potentially has several next-hops for any given destination and must use some method to choose which next-hop should be used for a given data packet.",
"title": ""
},
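A minimal illustration of choosing among equal-cost next-hops with a per-flow hash (one of the approaches such documents discuss), so that packets of one flow stay on one path; the addresses and ports are placeholders.

```python
# Hash the flow identifier and select one of the equal-cost next-hops.
import hashlib

next_hops = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # equal-cost next-hops (placeholders)

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto):
    flow = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(flow).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

# same flow identifier -> same next-hop, so packet ordering within a flow is preserved
assert pick_next_hop("192.0.2.1", "198.51.100.7", 40000, 443, "tcp") == \
       pick_next_hop("192.0.2.1", "198.51.100.7", 40000, 443, "tcp")
```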
{
"docid": "d8b0ef94385d1379baeb499622253a02",
"text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. We review current approaches for mining negative association rules and we discuss limitations and future research directions.",
"title": ""
},
{
"docid": "81c2fca06af30c27e74267dbccd84080",
"text": "Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.",
"title": ""
},
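The averaging step at the heart of Averaged-DQN can be sketched as follows; plain numpy matrices stand in for the last K learned Q-networks purely to show how the target is built from their averaged estimate.

```python
# Average the Q-value estimates of the last K network snapshots when forming targets.
import numpy as np

rng = np.random.default_rng(0)
state_dim, num_actions, K = 8, 4, 5
snapshots = [rng.normal(size=(state_dim, num_actions)) for _ in range(K)]  # past Q-nets

def q_values(weights, state):
    return state @ weights                           # stand-in for a network forward pass

def averaged_q(state):
    return np.mean([q_values(w, state) for w in snapshots], axis=0)

state = rng.normal(size=state_dim)
reward, gamma = 1.0, 0.99
target = reward + gamma * averaged_q(state).max()    # target from the averaged estimate
```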
{
"docid": "92e186ba05566110020ed92df960f3d5",
"text": "From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "4c3b4a6c173a40327c2db17772cbd242",
"text": "We reproduce four Twitter sentiment classification approaches that participated in previous SemEval editions with diverse feature sets. The reproduced approaches are combined in an ensemble, averaging the individual classifiers’ confidence scores for the three classes (positive, neutral, negative) and deciding sentiment polarity based on these averages. The experimental evaluation on SemEval data shows our re-implementations to slightly outperform their respective originals. Moreover, not too surprisingly, the ensemble of the reproduced approaches serves as a strong baseline in the current edition where it is top-ranked on the 2015 test set.",
"title": ""
},
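The combination rule described above reduces to a few lines: average each classifier's per-class confidence scores and pick the class with the highest average. The scores below are invented.

```python
# Ensemble by averaging per-class confidence scores across classifiers.
import numpy as np

classes = ["positive", "neutral", "negative"]
# rows = individual classifiers, columns = confidence per class for one tweet
scores = np.array([
    [0.60, 0.30, 0.10],
    [0.40, 0.35, 0.25],
    [0.55, 0.20, 0.25],
    [0.30, 0.45, 0.25],
])
prediction = classes[int(np.argmax(scores.mean(axis=0)))]
print(prediction)   # "positive" for these made-up scores
```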
{
"docid": "1d56b3aa89484e3b25557880ec239930",
"text": "We present an FPGA accelerator for the Non-uniform Fast Fourier Transform, which is a technique to reconstruct images from arbitrarily sampled data. We accelerate the compute-intensive interpolation step of the NuFFT Gridding algorithm by implementing it on an FPGA. In order to ensure efficient memory performance, we present a novel FPGA implementation for Geometric Tiling based sorting of the arbitrary samples. The convolution is then performed by a novel Data Translation architecture which is composed of a multi-port local memory, dynamic coordinate-generator and a plug-and-play kernel pipeline. Our implementation is in single-precision floating point and has been ported onto the BEE3 platform. Experimental results show that our FPGA implementation can generate fairly high performance without sacrificing flexibility for various data-sizes and kernel functions. We demonstrate up to 8X speedup and up to 27 times higher performance-per-watt over a comparable CPU implementation and up to 20% higher performance-per-watt when compared to a relevant GPU implementation.",
"title": ""
},
{
"docid": "1d88a06a34beff2c3e926a6d24f70036",
"text": "Graph-based clustering methods perform clustering on a fixed input data graph. If this initial construction is of low quality then the resulting clustering may also be of low quality. Moreover, existing graph-based clustering methods require post-processing on the data graph to extract the clustering indicators. We address both of these drawbacks by allowing the data graph itself to be adjusted as part of the clustering procedure. In particular, our Constrained Laplacian Rank (CLR) method learns a graph with exactly k connected components (where k is the number of clusters). We develop two versions of this method, based upon the L1-norm and the L2-norm, which yield two new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic datasets and real-world benchmark datasets exhibit the effectiveness of this new graph-based clustering method. Introduction State-of-the art clustering methods are often based on graphical representations of the relationships among data points. For example, spectral clustering (Ng, Jordan, and Weiss 2001), normalized cut (Shi and Malik 2000) and ratio cut (Hagen and Kahng 1992) all transform the data into a weighted, undirected graph based on pairwise similarities. Clustering is then accomplished by spectral or graphtheoretic optimization procedures. See (Ding and He 2005; Li and Ding 2006) for a discussion of the relations among these graph-based methods, and also the connections to nonnegative matrix factorization. All of these methods involve a two-stage process in which an data graph is formed from the data, and then various optimization procedures are invoked on this fixed input data graph. A disadvantage of this two-stage process is that the final clustering structures are not represented explicitly in the data graph (e.g., graph-cut methods often use K-means algorithm to post-process the ∗To whom all correspondence should be addressed. This work was partially supported by US NSF-IIS 1117965, NSFIIS 1302675, NSF-IIS 1344152, NSF-DBI 1356628, NIH R01 AG049371. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. results to get the clustering indicators); also, the clustering results are dependent on the quality of the input data graph (i.e., they are sensitive to the particular graph construction methods). It seems plausible that a strategy in which the optimization phase is allowed to change the data graph could have advantages relative to the two-phase strategy. In this paper we propose a novel graph-based clustering model that learns a graph with exactly k connected components (where k is the number of clusters). In our new model, instead of fixing the input data graph associated to the affinity matrix, we learn a new data similarity matrix that is a block diagonal matrix and has exactly k connected components—the k clusters. Thus, our new data similarity matrix is directly useful for the clustering task; the clustering results can be immediately obtained without requiring any post-processing to extract the clustering indicators. To achieve such ideal clustering structures, we impose a rank constraint on the Laplacian graph of the new data similarity matrix, thereby guaranteeing the existence of exactly k connected components. Considering both L2-norm and L1norm objectives, we propose two new clustering objectives and derive optimization algorithms to solve them. 
We also introduce a novel graph-construction method to initialize the graph associated with the affinity matrix. We conduct empirical studies on simulated datasets and seven real-world benchmark datasets to validate our proposed methods. The experimental results are promising: we find that our new graph-based clustering method consistently outperforms other related methods in most cases. Notation: Throughout the paper, all matrices are written as uppercase. For a matrix M, the i-th row and the ij-th element of M are denoted by mi and mij, respectively. The trace of matrix M is denoted by Tr(M). The L2-norm of vector v is denoted by ‖v‖2, and the Frobenius norm and the L1-norm of matrix M are denoted by ‖M‖F and ‖M‖1, respectively. New Clustering Formulations Graph-based clustering approaches typically optimize their objectives based on a given data graph associated with an affinity matrix A ∈ Rn×n (which can be symmetric or nonsymmetric), where n is the number of nodes (data points) in the graph. There are two drawbacks with these approaches: (1) the clustering performance is sensitive to the quality of the data graph construction; (2) the cluster structures are not explicit in the clustering results and a post-processing step is needed to uncover the clustering indicators. To address these two challenges, we aim to learn a new data graph S based on the given data graph A such that the new data graph is more suitable for the clustering task. In our strategy, we propose to learn a new data graph S that has exactly k connected components, where k is the number of clusters. In order to formulate a clustering objective based on this strategy, we start from the following theorem. If the affinity matrix A is nonnegative, then the Laplacian matrix LA = DA − (A + A^T)/2, where the degree matrix DA ∈ Rn×n is defined as a diagonal matrix whose i-th diagonal element is ∑_j (aij + aji)/2, has the following important property (Mohar 1991; Chung 1997): Theorem 1 The multiplicity k of the eigenvalue zero of the Laplacian matrix LA is equal to the number of connected components in the graph associated with A. Given a graph with affinity matrix A, Theorem 1 indicates that if rank(LA) = n − k, then the graph is an ideal graph based on which we can already partition the data points into k clusters, without the need of performing K-means or other discretization procedures as is necessary with traditional graph-based clustering methods such as spectral clustering. Motivated by Theorem 1, given an initial affinity matrix A ∈ Rn×n, we learn a similarity matrix S ∈ Rn×n such that the corresponding Laplacian matrix LS = DS − (S + S^T)/2 is constrained to be rank(LS) = n − k. Under this constraint, the learned S is block diagonal with proper permutation, and thus we can directly partition the data points into k clusters based on S (Nie, Wang, and Huang 2014). To avoid the case that some rows of S are all zeros, we further constrain S such that the sum of each row of S is one. Under these constraints, we learn the S that best approximates the initial affinity matrix A. Considering two different distances, the L2-norm and the L1-norm, between the given affinity matrix A and the learned similarity matrix S, we define the Constrained Laplacian Rank (CLR) objectives for graph-based clustering as the solutions to the following optimization problems: J_CLR-L2 = min_{∑_j sij=1, sij≥0, rank(LS)=n−k} ‖S − A‖_F^2 (1) and J_CLR-L1 = min_{∑_j sij=1, sij≥0, rank(LS)=n−k} ‖S − A‖_1 (2). 
These problems seem very difficult to solve, since LS = DS − (S + S^T)/2, where DS also depends on S, and the constraint rank(LS) = n − k is a complex nonlinear constraint. In the next section, we will propose novel and efficient algorithms to solve these problems. Optimization Algorithms Optimization Algorithm for Solving J_CLR-L2 in Eq. (1) Let σi(LS) denote the i-th smallest eigenvalue of LS. Note that σi(LS) ≥ 0 because LS is positive semidefinite. Problem (1) is equivalent to the following problem for a large enough value of λ: min_{∑_j sij=1, sij≥0} ‖S − A‖_F^2 + 2λ ∑_{i=1}^{k} σi(LS)",
"title": ""
},
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classification. The objective function is composed of two terms. The first term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The first contribution is that we have shown cosine similarity to be an effective alternative to Euclidean distance in the metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric significantly in most cases. Our method is different from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space, whilst our method uses cosine similarity, which leads to a simple and effective metric learning method. The rest of this paper is structured as follows. Section 2 presents the CSML method in detail. Section 3 presents how CSML can be applied to face verification. Experimental results are presented in section 4. Finally, the conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is defined as: CS(x, y) = x^T y / (‖x‖ ‖y‖). Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and effective. 1.2 Metric learning formulation Let {(xi, yi, li)}_{i=1}^{s} denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R^m and binary class labels li ∈ {1, 0}, which indicate whether xi and yi match or not. The goal is to learn a linear transformation A : R^m → R^d (d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y, A) = (Ax)^T (Ay) / (‖Ax‖ ‖Ay‖) = x^T A^T A y / (√(x^T A^T A x) √(y^T A^T A y)). Specifically, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by defining the objective function. 1.3 Objective function First, we define positive and negative sample index sets Pos and Neg as:",
"title": ""
},
{
"docid": "61ad35eaee012d8c1bddcaeee082fa22",
"text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.",
"title": ""
},
{
"docid": "54663fcef476f15e2b5261766a19375b",
"text": "In this study, performances of classification techniques were compared in order to predict the presence of coronary artery disease (CAD). A retrospective analysis was performed in 1245 subjects (865 presence of CAD and 380 absence of CAD). We compared performances of logistic regression (LR), classification and regression tree (CART), multi-layer perceptron (MLP), radial basis function (RBF), and self-organizing feature maps (SOFM). Predictor variables were age, sex, family history of CAD, smoking status, diabetes mellitus, systemic hypertension, hypercholesterolemia, and body mass index (BMI). Performances of classification techniques were compared using ROC curve, Hierarchical Cluster Analysis (HCA), and Multidimensional Scaling (MDS). Areas under the ROC curves are 0.783, 0.753, 0.745, 0.721, and 0.675, respectively for MLP, LR, CART, RBF, and SOFM. MLP was found the best technique to predict presence of CAD in this data set, given its good classificatory performance. MLP, CART, LR, and RBF performed better than SOFM in predicting CAD in according to HCA and MDS. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "af9768101a634ab57eb2554953ef63ec",
"text": "Very recently, there has been a perfect storm of technical advances that has culminated in the emergence of a new interaction modality: on-body interfaces. Such systems enable the wearer to use their body as an input and output platform with interactive graphics. Projects such as PALMbit and Skinput sought to answer the initial and fundamental question: whether or not on-body interfaces were technologically possible. Although considerable technical work remains, we believe it is important to begin shifting the question away from how and what, and towards where, and ultimately why. These are the class of questions that inform the design of next generation systems. To better understand and explore this expansive space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. The results of this complimentary, structured exploration, point the way towards more comfortable, efficacious, and enjoyable on-body user experiences.",
"title": ""
}
] |
scidocsrr
|
091891a7bfc530d874755bd034d1e055
|
Interpreting Visual Question Answering Models
|
[
{
"docid": "56934c400280e56dffbb27e6d06c21b9",
"text": "Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance .",
"title": ""
}
] |
[
{
"docid": "36f960b37e7478d8ce9d41d61195f83a",
"text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "c664918193470b20af2ce2ecf0c8e1c7",
"text": "The exceptional electronic properties of graphene, with its charge carriers mimicking relativistic quantum particles and its formidable potential in various applications, have ensured a rapid growth of interest in this new material. We report on electron transport in quantum dot devices carved entirely from graphene. At large sizes (>100 nanometers), they behave as conventional single-electron transistors, exhibiting periodic Coulomb blockade peaks. For quantum dots smaller than 100 nanometers, the peaks become strongly nonperiodic, indicating a major contribution of quantum confinement. Random peak spacing and its statistics are well described by the theory of chaotic neutrino billiards. Short constrictions of only a few nanometers in width remain conductive and reveal a confinement gap of up to 0.5 electron volt, demonstrating the possibility of molecular-scale electronics based on graphene.",
"title": ""
},
{
"docid": "dbda7c4876586773889b7579829a02cb",
"text": "There has been an increase in the amount of multilingual text on the Internet due to the proliferation of news sources and blogs. The Urdu language, in particular, has experienced explosive growth on the Web. Text mining for information discovery, which includes tasks such as identifying topics, relationships and events, and sentiment analysis, requires sophisticated natural language processing (NLP). NLP systems begin with modules such as word segmentation, part-of-speech tagging, and morphological analysis and progress to modules such as shallow parsing and named entity tagging. While there have been considerable advances in developing such comprehensive NLP systems for English, the work for Urdu is still in its infancy. The tasks of interest in Urdu NLP includes analyzing data sources such as blogs and comments to news articles to provide insight into social and human behavior. All of this requires a robust NLP system. The objective of this work is to develop an NLP infrastructure for Urdu that is customizable and capable of providing basic analysis on which more advanced information extraction tools can be built. This system assimilates resources from various online sources to facilitate improved named entity tagging and Urdu-to-English transliteration. The annotated data required to train the learning models used here is acquired by standardizing the currently limited resources available for Urdu. Techniques such as bootstrap learning and resource sharing from a syntactically similar language, Hindi, are explored to augment the available annotated Urdu data. Each of the new Urdu text processing modules has been integrated into a general text-mining platform. The evaluations performed demonstrate that the accuracies have either met or exceeded the state of the art.",
"title": ""
},
{
"docid": "0af3e6e48d3745b7ea52aae25c26fe10",
"text": "MOEA/D is a recently proposed methodology of Multiobjective Evolution Algorithms that decomposes multiobjective problems into a number of scalar subproblems and optimizes them simultaneously. However, classical MOEA/D uses same weight vectors for different shapes of Pareto front. We propose a novel method called Pareto-adaptive weight vectors (paλ) to automatically adjust the weight vectors by the geometrical characteristics of Pareto front. Evaluation on different multiobjective problems confirms that the new algorithm obtains higher hypervolume, better convergence and more evenly distributed solutions than classical MOEA/D and NSGA-II.",
"title": ""
},
{
"docid": "48be442dfe31fbbbefb6fbf0833112fb",
"text": "When documents and queries are presented in different languages, the common approach is to translate the query into the document language. While there are a variety of query translation approaches, recent research suggests that combining multiple methods into a single ”structured query” is the most effective. In this paper, we introduce a novel approach for producing a unique combination recipe for each query, as it has also been shown that the optimal combination weights differ substantially across queries and other task specifics. Our query-specific combination method generates statistically significant improvements over other combination strategies presented in the literature, such as uniform and task-specific weighting. An in-depth empirical analysis presents insights about the effect of data size, domain differences, labeling and tuning on the end performance of our approach.",
"title": ""
},
{
"docid": "63405a3fc4815e869fc872bb96bb8a33",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "4b8f4a8b6c303ea0a20c48840f677ea7",
"text": "PURPOSE/OBJECTIVES\nTo identify subgroups of outpatients with cancer based on their experiences with the symptoms of fatigue, sleep disturbance, depression, and pain; to explore whether patients in the subgroups differed on selected demographic, disease, and treatment characteristics; and to determine whether patients in the subgroups differed on two important patient outcomes: functional status and quality of life (QOL).\n\n\nDESIGN\nDescriptive, correlational study.\n\n\nSETTING\nFour outpatient oncology practices in northern California.\n\n\nSAMPLE\n191 outpatients with cancer receiving active treatment.\n\n\nMETHODS\nPatients completed a demographic questionnaire, Karnofsky Performance Status scale, Lee Fatigue Scale, General Sleep Disturbance Scale, Center for Epidemiological Studies Depression Scale, Multidimensional Quality-of-Life Scale Cancer, and a numeric rating scale of worst pain intensity. Medical records were reviewed for disease and treatment information. Cluster analysis was used to identify patient subgroups based on patients symptom experiences. Differences in demographic, disease, and treatment characteristics as well as in outcomes were evaluated using analysis of variance and chi square analysis.\n\n\nMAIN RESEARCH VARIABLES\nSubgroup membership, fatigue, sleep disturbance, depression, pain, functional status, and QOL.\n\n\nFINDINGS\nFour relatively distinct patient subgroups were identified based on patients experiences with four highly prevalent and related symptoms.\n\n\nCONCLUSIONS\nThe subgroup of patients who reported low levels of all four symptoms reported the best functional status and QOL.\n\n\nIMPLICATIONS FOR NURSING\nThe findings from this study need to be replicated before definitive clinical practice recommendations can be made. Until that time, clinicians need to assess patients for the occurrence of multiple symptoms that may place them at increased risk for poorer outcomes.",
"title": ""
},
{
"docid": "388101f40ff79f2543b111aad96c4180",
"text": "Based on available literature, ecology and economy of light emitting diode (LED) lights in plant foods production were assessed and compared to high pressure sodium (HPS) and compact fluorescent light (CFL) lamps. The assessment summarises that LEDs are superior compared to other lamp types. LEDs are ideal in luminous efficiency, life span and electricity usage. Mercury, carbon dioxide and heat emissions are also lowest in comparison to HPS and CFL lamps. This indicates that LEDs are indeed economic and eco-friendly lighting devices. The present review indicates also that LEDs have many practical benefits compared to other lamp types. In addition, they are applicable in many purposes in plant foods production. The main focus of the review is the targeted use of LEDs in order to enrich phytochemicals in plants. This is an expedient to massive improvement in production efficiency, since it diminishes the number of plants per phytochemical unit. Consequently, any other production costs (e.g. growing space, water, nutrient and transport) may be reduced markedly. Finally, 24 research articles published between 2013 and 2017 were reviewed for targeted use of LEDs in the specific, i.e. blue range (400-500 nm) of spectrum. The articles indicate that blue light is efficient in enhancing the accumulation of health beneficial phytochemicals in various species. The finding is important for global food production. © 2017 Society of Chemical Industry.",
"title": ""
},
{
"docid": "231d7797961326974ca3a3d2271810ae",
"text": "Agile methods form an alternative to waterfall methodologies. Little is known about activity composition, the proportion of varying activities in agile processes and the extent to which the proportions of activities differ from \"waterfall\" processes. In the current study, we examine the variation in per formative routines in one large agile and traditional lifecycle project using an event sequencing method. Our analysis shows that the enactment of waterfall and agile routines differ significantly suggesting that agile process is composed of fewer activities which are repeated iteratively1.",
"title": ""
},
{
"docid": "c61107e9c5213ddb8c5e3b1b14dca661",
"text": "In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image at the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.",
"title": ""
},
{
"docid": "a861082476893281800441c46e71d652",
"text": "Current debates on design research, and its relation to other research fields and scientific disciplines, refer back to a fundamental distinction introduced by Herb Simon (Simon, 1996 (1981)): Design and design research do not primarily focus on explaining the world as it is; they share with engineering a fundamental interest in focusing on the world as it could be. In parallel, we observe a growing interest in the science studies to interpret scientific research as a constructive and creative practice (Knorr Cetina, 1999; 2002), organized as experimental systems (Rheinberger, 2001). Design fiction is a new approach, which integrates these two perspectives, in order to develop a method toolbox for design research for a complex world (Bleecker, 2009; Wiedmer & Caviezel, 2009; Grand 2010).",
"title": ""
},
{
"docid": "fc3b45bf5fa1843d7284103e37326b71",
"text": "The N-terminal cleavage product of human insulin-like growth factor-1 (IGF-1) in the brain is the tripeptide molecule Glypromate (Gly-Pro-Glu). Glypromate has demonstrated neuroprotective effects in numerous in vitro and in vivo models of brain injury and is in clinical trials for the prevention of cognitive impairment following cardiac surgery. NNZ-2566 is a structural analogue of Glypromate, resulting from alpha-methylation of the proline moiety, which has improved the elimination half-life and oral bioavailability over the parent peptide. In vivo, NNZ-2566 reduces injury size in rats subjected to focal stroke. An intravenous infusion of NNZ-2566 of 4 h duration (3-10 mg/kg/h), initiated 3 h after endothelin-induced middle-cerebral artery constriction, significantly reduced infarct area as assessed on day 5. Neuroprotective efficacy in the MCAO model was also observed following oral administration of the drug (30-60 mg/kg), when formulated as a microemulsion. In vitro, NNZ-2566 significantly attenuates apoptotic cell death in primary striatal cultures, suggesting attenuation of apoptosis is one mechanism of action underlying its neuroprotective effects. NNZ-2566 is currently in clinical trials for the treatment of cognitive deficits following traumatic brain injury, and these data further support the development of the drug as a neuroprotective agent for acute brain injury.",
"title": ""
},
{
"docid": "26f2e3918eb624ce346673d10b5d2eb7",
"text": "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.",
"title": ""
},
{
"docid": "f162ca10328e222567d33ac4920a2c94",
"text": "We live in a digital world where every detail of our information is being transferred from one smart device to another via cross-platform, third-party cloud services. Smart technologies, such as, smartphones are playing dynamic roles in order to successfully complete our daily routines and official tasks that require access to all types of critical data. Before the advent of these smart technologies, securing critical information was quite a challenge. However, after the advent and global adoption of such technologies, information security has become one of the primary and most fundamental task for security professionals. The integration of social media has made this task even more challenging to undertake successfully. To this day, there are plentiful studies in which numerous authentication and security techniques were proposed and developed for smartphone and cloud computing technologies. These studies have successfully addressed multiple authentication threats and other related issues in existing the smartphone and cloud computing technologies. However, to the best of our understanding and knowledge, these studies lack many aspects in terms of authentication attacks, logical authentication analysis, and the absence of authentication implementation scenarios. Due to these authentication anomalies and ambiguities, such studies cannot be fully considered for successful implementation. Therefore, in this paper, we have performed a comprehensive security analysis and review of various smartphone and cloud computing authentication frameworks and protocols to outline up-to-date authentication threats and issues in the literature. These authentication challenges are further summarized and presented in the form of different graphs to illustrate where the research is currently heading. Finally, based on those outcomes, we identify the latest and existing authentication uncertainties, threats, and other related issues to address future directions and open research issues in the domain of the smartphone-and cloud-computing authentication.",
"title": ""
},
{
"docid": "0b22d7708437c47d5e83ea9fc5f24406",
"text": "The American Association for Respiratory Care has declared a benchmark for competency in mechanical ventilation that includes the ability to \"apply to practice all ventilation modes currently available on all invasive and noninvasive mechanical ventilators.\" This level of competency presupposes the ability to identify, classify, compare, and contrast all modes of ventilation. Unfortunately, current educational paradigms do not supply the tools to achieve such goals. To fill this gap, we expand and refine a previously described taxonomy for classifying modes of ventilation and explain how it can be understood in terms of 10 fundamental constructs of ventilator technology: (1) defining a breath, (2) defining an assisted breath, (3) specifying the means of assisting breaths based on control variables specified by the equation of motion, (4) classifying breaths in terms of how inspiration is started and stopped, (5) identifying ventilator-initiated versus patient-initiated start and stop events, (6) defining spontaneous and mandatory breaths, (7) defining breath sequences (8), combining control variables and breath sequences into ventilatory patterns, (9) describing targeting schemes, and (10) constructing a formal taxonomy for modes of ventilation composed of control variable, breath sequence, and targeting schemes. Having established the theoretical basis of the taxonomy, we demonstrate a step-by-step procedure to classify any mode on any mechanical ventilator.",
"title": ""
},
{
"docid": "b18c8b7472ba03a260d63b886a6dc11d",
"text": "In this paper, we propose a novel technique for automatic table detection in document images. Lines and tables are among the most frequent graphic, non-textual entities in documents and their detection is directly related to the OCR performance as well as to the document layout description. We propose a workflow for table detection that comprises three distinct steps: (i) image pre-processing; (ii) horizontal and vertical line detection and (iii) table detection. The efficiency of the proposed method is demonstrated by using a performance evaluation scheme which considers a great variety of documents such as forms, newspapers/magazines, scientific journals, tickets/bank cheques, certificates and handwritten documents.",
"title": ""
},
{
"docid": "b261534c045299c1c3a0e0cc37caa618",
"text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.",
"title": ""
},
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
}
] |
scidocsrr
|
1d1c6ddf68518be598efebfa4b7c63ca
|
A Learning-based Framework for Hybrid Depth-from-Defocus and Stereo Matching
|
[
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
}
] |
[
{
"docid": "f267f44fe9463ac0114335959f9739fa",
"text": "HTTP Adaptive Streaming (HAS) is today the number one video technology for over-the-top video distribution. In HAS, video content is temporally divided into multiple segments and encoded at different quality levels. A client selects and retrieves per segment the most suited quality version to create a seamless playout. Despite the ability of HAS to deal with changing network conditions, HAS-based live streaming often suffers from freezes in the playout due to buffer under-run, low average quality, large camera-to-display delay, and large initial/channel-change delay. Recently, IETF has standardized HTTP/2, a new version of the HTTP protocol that provides new features for reducing the page load time in Web browsing. In this paper, we present ten novel HTTP/2-based methods to improve the quality of experience of HAS. Our main contribution is the design and evaluation of a push-based approach for live streaming in which super-short segments are pushed from server to client as soon as they become available. We show that with an RTT of 300 ms, this approach can reduce the average server-to-display delay by 90.1% and the average start-up delay by 40.1%.",
"title": ""
},
{
"docid": "81b24cc33a54dcd6ca4af6264ad24a9a",
"text": "In many envisioned drone-based applications, drones will communicate with many different smart objects, such as sensors and embedded devices. Securing such communications requires an effective and efficient encryption key establishment protocol. However, the design of such a protocol must take into account constrained resources of smart objects and the mobility of drones. In this paper, a secure communication protocol between drones and smart objects is presented. To support the required security functions, such as authenticated key agreement, non-repudiation, and user revocation, we propose an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM). eCLSC-TKEM reduces the time required to establish a shared key between a drone and a smart object by minimizing the computational overhead at the smart object. Also, our protocol improves drone's efficiency by utilizing dual channels which allows many smart objects to concurrently execute eCLSC-TKEM. We evaluate our protocol on commercially available devices, namely AR.Drone2.0 and TelosB, by using a parking management testbed. Our experimental results show that our protocol is much more efficient than other protocols.",
"title": ""
},
{
"docid": "b4c8a34f9bda4b232d73ee7eafb30f88",
"text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this artificial intelligence a new synthesis, how can you bargain with the thing that has many benefits for you?",
"title": ""
},
{
"docid": "4a2fcdf5394e220a579d1414588a124a",
"text": "In this paper we introduce AR Scratch, the first augmented-reality (AR) authoring environment designed for children. By adding augmented-reality functionality to the Scratch programming platform, this environment allows pre-teens to create programs that mix real and virtual spaces. Children can display virtual objects on a real-world space seen through a camera, and they can control the virtual world through interactions between physical objects. This paper describes the system design process, which focused on appropriately presenting the AR technology to the typical Scratch population (children aged 8-12), as influenced by knowledge of child spatial cognition, programming expertise, and interaction metaphors. Evaluation of this environment is proposed, accompanied by results from an initial pilot study, as well as discussion of foreseeable impacts on the Scratch user community.",
"title": ""
},
{
"docid": "65e273d046a8120532d8cd04bcadca56",
"text": "This paper explores the relationship between domain scheduling in avirtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as asecondary concern. However, this can resultin poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, andlatency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.",
"title": ""
},
{
"docid": "6cd4ed54497e30aba681b1e2bc79d29c",
"text": "Industrial systems consider only partially security, mostly relying on the basis of “isolated” networks, and controlled access environments. Monitoring and control systems such as SCADA/DCS are responsible for managing critical infrastructures operate in these environments, where a false sense of security assumptions is usually made. The Stuxnet worm attack demonstrated widely in mid 2010 that many of the security assumptions made about the operating environment, technological capabilities and potential threat risk analysis are far away from the reality and challenges modern industrial systems face. We investigate in this work the highly sophisticated aspects of Stuxnet, the impact that it may have on existing security considerations and pose some thoughts on the next generation SCADA/DCS systems from a security perspective.",
"title": ""
},
{
"docid": "47a87a903c4a8ef650fdbf670fca8568",
"text": "Social networks are a popular movement on the web. On the Semantic Web, it is simple to make trust annotations to social relationships. In this paper, we present a two level approach to integrating trust, provenance, and annotations in Semantic Web systems. We describe an algorithm for inferring trust relationships using provenance information and trust annotations in Semantic Web-based social networks. Then, we present an application, FilmTrust, that combines the computed trust values with the provenance of other annotations to personalize the website. The FilmTrust system uses trust to compute personalized recommended movie ratings and to order reviews. We believe that the results obtained with FilmTrust illustrate the success that can be achieved using this method of combining trust and provenance on the Semantic Web.",
"title": ""
},
{
"docid": "21042ce5670109dd548e43ca46cacbfd",
"text": "The CRISPR/Cas adaptive immune system provides resistance against phages and plasmids in Archaea and Bacteria. CRISPR loci integrate short DNA sequences from invading genetic elements that provide small RNA-mediated interference in subsequent exposure to matching nucleic acids. In Streptococcus thermophilus, it was previously shown that the CRISPR1/Cas system can provide adaptive immunity against phages and plasmids by integrating novel spacers following exposure to these foreign genetic elements that subsequently direct the specific cleavage of invasive homologous DNA sequences. Here, we show that the S. thermophilus CRISPR3/Cas system can be transferred into Escherichia coli and provide heterologous protection against plasmid transformation and phage infection. We show that interference is sequence-specific, and that mutations in the vicinity or within the proto-spacer adjacent motif (PAM) allow plasmids to escape CRISPR-encoded immunity. We also establish that cas9 is the sole cas gene necessary for CRISPR-encoded interference. Furthermore, mutation analysis revealed that interference relies on the Cas9 McrA/HNH- and RuvC/RNaseH-motifs. Altogether, our results show that active CRISPR/Cas systems can be transferred across distant genera and provide heterologous interference against invasive nucleic acids. This can be leveraged to develop strains more robust against phage attack, and safer organisms less likely to uptake and disseminate plasmid-encoded undesirable genetic elements.",
"title": ""
},
{
"docid": "6eb1730f03265d09db40bbda8c71c2cd",
"text": "In this opinion piece, we argue that there is a need for alternative design directions to complement existing AI efforts in narrative and character generation and algorithm development. To make our argument, we a) outline the predominant roles and goals of AI research in storytelling; b) present existing discourse on the benefits and harms of narratives; and c) highlight the pain points in character creation revealed by semi-structured interviews we conducted with 14 individuals deeply involved in some form of character creation. We conclude by proffering several specific design avenues that we believe can seed fruitful research collaborations. In our vision, AI collaborates with humans during creative processes and narrative generation, helps amplify voices and perspectives that are currently marginalized or misrepresented, and engenders experiences of narrative that support spectatorship and listening roles.",
"title": ""
},
{
"docid": "56444dce712e313c0c014a260f97a6b3",
"text": "Ecology and historical (phylogeny-based) biogeography have much to offer one another, but exchanges between these fields have been limited. Historical biogeography has become narrowly focused on using phylogenies to discover the history of geological connections among regions. Conversely, ecologists often ignore historical biogeography, even when its input can be crucial. Both historical biogeographers and ecologists have more-or-less abandoned attempts to understand the processes that determine the large-scale distribution of clades. Here, we describe the chasm that has developed between ecology and historical biogeography, some of the important questions that have fallen into it and how it might be bridged. To illustrate the benefits of an integrated approach, we expand on a model that can help explain the latitudinal gradient of species richness.",
"title": ""
},
{
"docid": "7917c6d9a9d495190e5b7036db92d46d",
"text": "Background A precise understanding of the anatomical structures of the heart and great vessels is essential for surgical planning in order to avoid unexpected findings. Rapid prototyping techniques are used to print three-dimensional (3D) replicas of patients’ cardiovascular anatomy based on 3D clinical images such as MRI. The purpose of this study is to explore the use of 3D patient-specific cardiovascular models using rapid prototyping techniques to improve surgical planning in patients with complex congenital heart disease.",
"title": ""
},
{
"docid": "9eee385499bfb25dd728dde7dfdc7951",
"text": "Unsupervised video segmentation plays an important role in a wide variety of applications from object identification to compression. However, to date, fast motion, motion blur and occlusions pose significant challenges. To address these challenges for unsupervised video segmentation, we develop a novel saliency estimation technique as well as a novel neighborhood graph, based on optical flow and edge cues. Our approach leads to significantly better initial foreground-background estimates and their robust as well as accurate diffusion across time. We evaluate our proposed algorithm on the challenging DAVIS, SegTrack v2 and FBMS-59 datasets. Despite the usage of only a standard edge detector trained on 200 images, our method achieves state-of-the-art results outperforming deep learning based methods in the unsupervised setting. We even demonstrate competitive results comparable to deep learning based methods in the semi-supervised setting on the DAVIS dataset.",
"title": ""
},
{
"docid": "cf54533bc317b960fc80f22baa26d7b1",
"text": "The state-of-the-art named entity recognition (NER) systems are statistical machine learning models that have strong generalization capability (i.e., can recognize unseen entities that do not appear in training data) based on lexical and contextual information. However, such a model could still make mistakes if its features favor a wrong entity type. In this paper, we utilize Wikipedia as an open knowledge base to improve multilingual NER systems. Central to our approach is the construction of high-accuracy, highcoverage multilingual Wikipedia entity type mappings. These mappings are built from weakly annotated data and can be extended to new languages with no human annotation or language-dependent knowledge involved. Based on these mappings, we develop several approaches to improve an NER system. We evaluate the performance of the approaches via experiments on NER systems trained for 6 languages. Experimental results show that the proposed approaches are effective in improving the accuracy of such systems on unseen entities, especially when a system is applied to a new domain or it is trained with little training data (up to 18.3 F1 score improvement).",
"title": ""
},
{
"docid": "62166980f94bba5e75c9c6ad4a4348f1",
"text": "In this paper the design and the implementation of a linear, non-uniform antenna array for a 77-GHz MIMO FMCW system that allows for the estimation of both the distance and the angular position of a target are presented. The goal is to achieve a good trade-off between the main beam width and the side lobe level. The non-uniform spacing in addition with the MIMO principle offers a superior performance compared to a classical uniform half-wavelength antenna array with an equal number of elements. However the design becomes more complicated and can not be tackled using analytical methods. Starting with elementary array factor considerations the design is approached using brute force, stepwise brute force, and particle swarm optimization. The particle swarm optimized array was also implemented. Simulation results and measurements are presented and discussed.",
"title": ""
},
{
"docid": "30d0ff3258decd5766d121bf97ae06d4",
"text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.",
"title": ""
},
{
"docid": "ab572c22a75656c19e50b311eb4985ec",
"text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.",
"title": ""
},
{
"docid": "8a92594dbd75885002bad0dc2e658e10",
"text": "Exposure to some music, in particular classical music, has been reported to produce transient increases in cognitive performance. The authors investigated the effect of listening to an excerpt of Vivaldi's Four Seasons on category fluency in healthy older adult controls and Alzheimer's disease patients. In a counterbalanced repeated-measure design, participants completed two, 1-min category fluency tasks whilst listening to an excerpt of Vivaldi and two, 1-min category fluency tasks without music. The authors report a positive effect of music on category fluency, with performance in the music condition exceeding performance without music in both the healthy older adult control participants and the Alzheimer's disease patients. In keeping with previous reports, the authors conclude that music enhances attentional processes, and that this can be demonstrated in Alzheimer's disease.",
"title": ""
},
{
"docid": "a8785a543b30082141df4956838b956a",
"text": "Covalent organic frameworks (COFs) are newly emerged crystalline porous polymers with well-defined skeletons and nanopores mainly consisted of light-weight elements (H, B, C, N and O) linked by dynamic covalent bonds. Compared with conventional materials, COFs possess some unique and attractive features, such as large surface area, pre-designable pore geometry, excellent crystallinity, inherent adaptability and high flexibility in structural and functional design, thus exhibiting great potential for various applications. Especially, their large surface area and tunable porosity and π conjugation with unique photoelectric properties will enable COFs to serve as a promising platform for drug delivery, bioimaging, biosensing and theranostic applications. In this review, we trace the evolution of COFs in terms of linkages and highlight the important issues on synthetic method, structural design, morphological control and functionalization. And then we summarize the recent advances of COFs in the biomedical and pharmaceutical sectors and conclude with a discussion of the challenges and opportunities of COFs for biomedical purposes. Although currently still at its infancy stage, COFs as an innovative source have paved a new way to meet future challenges in human healthcare and disease theranostic.",
"title": ""
},
{
"docid": "80563d90bfdccd97d9da0f7276468a43",
"text": "An essential aspect of knowing language is knowing the words of that language. This knowledge is usually thought to reside in the mental lexicon, a kind of dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on. In this article, a very different view is presented. In this view, words are understood as stimuli that operate directly on mental states. The phonological, syntactic and semantic properties of a word are revealed by the effects it has on those states.",
"title": ""
},
{
"docid": "8869e69647a16278d7a2ac26316ec5d0",
"text": "Despite significant progress, most existing visual dictionary learning methods rely on image descriptors alone or together with class labels. However, Web images are often associated with text data which may carry substantial information regarding image semantics, and may be exploited for visual dictionary learning. This paper explores this idea by leveraging relational information between image descriptors and textual words via co-clustering, in addition to information of image descriptors. Existing co-clustering methods are not optimal for this problem because they ignore the structure of image descriptors in the continuous space, which is crucial for capturing visual characteristics of images. We propose a novel Bayesian co-clustering model to jointly estimate the underlying distributions of the continuous image descriptors as well as the relationship between such distributions and the textual words through a unified Bayesian inference. Extensive experiments on image categorization and retrieval have validated the substantial value of the proposed joint modeling in improving visual dictionary learning, where our model shows superior performance over several recent methods.",
"title": ""
}
] |
scidocsrr
|
9e5015fdd74d1cc798e4ddae4dd3f0e1
|
RRA: Recurrent Residual Attention for Sequence Learning
|
[
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "97e4facde730c97a080ed160682f5dd0",
"text": "The application of deep learning to symbolic domains remains an active research endeavour. Graph neural networks (GNN), consisting of trained neural modules which can be arranged in different topologies at run time, are sound alternatives to tackle relational problems which lend themselves to graph representations. In this paper, we show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings ∈ R and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, i.e. is vertex v1 more central than vertex v2 given centrality c?. We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained (n = 128). Finally, we believe that our contributions attest to the potential of GNNs in symbolic domains in general and in relational learning in particular.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "2c15bef67e6bdbfaf66e1164f8dddf52",
"text": "Social behavior is ordinarily treated as being under conscious (if not always thoughtful) control. However, considerable evidence now supports the view that social behavior often operates in an implicit or unconscious fashion. The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor. The present conclusion--that attitudes, self-esteem, and stereotypes have important implicit modes of operation--extends both the construct validity and predictive usefulness of these major theoretical constructs of social psychology. Methodologically, this review calls for increased use of indirect measures--which are imperative in studies of implicit cognition. The theorized ordinariness of implicit stereotyping is consistent with recent findings of discrimination by people who explicitly disavow prejudice. The finding that implicit cognitive effects are often reduced by focusing judges' attention on their judgment task provides a basis for evaluating applications (such as affirmative action) aimed at reducing such unintended discrimination.",
"title": ""
},
{
"docid": "0ea6d4a02a4013a0f9d5aa7d27b5a674",
"text": "Recently, there has been growing interest in social network analysis. Graph models for social network analysis are usually assumed to be a deterministic graph with fixed weights for its edges or nodes. As activities of users in online social networks are changed with time, however, this assumption is too restrictive because of uncertainty, unpredictability and the time-varying nature of such real networks. The existing network measures and network sampling algorithms for complex social networks are designed basically for deterministic binary graphs with fixed weights. This results in loss of much of the information about the behavior of the network contained in its time-varying edge weights of network, such that is not an appropriate measure or sample for unveiling the important natural properties of the original network embedded in the varying edge weights. In this paper, we suggest that using stochastic graphs, in which weights associated with the edges are random variables, can be a suitable model for complex social network. Once the network model is chosen to be stochastic graphs, every aspect of the network such as path, clique, spanning tree, network measures and sampling algorithms should be treated stochastically. In particular, the network measures should be reformulated and new network sampling algorithms must be designed to reflect the stochastic nature of the network. In this paper, we first define some network measures for stochastic graphs, and then we propose four sampling algorithms based on learning automata for stochastic graphs. In order to study the performance of the proposed sampling algorithms, several experiments are conducted on real and synthetic stochastic graphs. The performances of these algorithms are studied in terms of Kolmogorov-Smirnov D statistics, relative error, Kendall’s rank correlation coefficient and relative cost.",
"title": ""
},
{
"docid": "746895b98974415f71912ed5dcd6ed61",
"text": "In the present study, Hu-Mikβ1, a humanized mAb directed at the shared IL-2/IL-15Rβ subunit (CD122) was evaluated in patients with T-cell large granular lymphocytic (T-LGL) leukemia. Hu-Mikβ1 blocked the trans presentation of IL-15 to T cells expressing IL-2/IL-15Rβ and the common γ-chain (CD132), but did not block IL-15 action in cells that expressed the heterotrimeric IL-15 receptor in cis. There was no significant toxicity associated with Hu-Mikβ1 administration in patients with T-LGL leukemia, but no major clinical responses were observed. One patient who had previously received murine Mikβ1 developed a measurable Ab response to the infused Ab. Nevertheless, the safety profile of this first in-human study of the humanized mAb to IL-2/IL-15Rβ (CD122) supports its evaluation in disorders such as refractory celiac disease, in which IL-15 and its receptor have been proposed to play a critical role in the pathogenesis and maintenance of disease activity.",
"title": ""
},
{
"docid": "215bb5273dbf5c301ae4170b5da39a34",
"text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.",
"title": ""
},
{
"docid": "08ecf17772853fe198c96837d43cf572",
"text": "Long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS) interventions can reduce malaria transmission by targeting mosquitoes when they feed upon sleeping humans and/or when they rest inside houses, livestock shelters or other man-made structures. However, many malaria vector species can maintain robust transmission, despite high coverage of LLINs/IRS containing insecticides to which they are physiologically fully susceptible, because they exhibit one or more behaviours that define the biological limits of achievable impact with these interventions: (1) natural or insecticide-induced avoidance of contact with treated surfaces within houses and early exit from them, minimizing exposure hazard of vectors which feed indoors upon humans, (2) feeding upon humans when they are active and unprotected outdoors, attenuating personal protection and any consequent community-wide suppression of transmission, (3) feeding upon animals, minimizing contact with insecticides targeted at humans or houses, (4) resting outdoors, away from insecticide-treated surfaces of nets, walls and roofs. Residual malaria transmission is therefore defined as all forms of transmission that can persist after achieving full population-wide coverage with effective LLIN and/or IRS containing active ingredients to which local vector populations are fully susceptible. Residual transmission is sufficiently intense across most of the tropics to render malaria elimination infeasible without new or improved vector control methods. Many novel or improved vector control strategies to address residual transmission are emerging that either (1) enhance control of adult vectors that enter houses to feed and/or rest by killing, repelling or excluding them, (2) kill or repel adult mosquitoes when they attack people outdoors, (3) kill adult mosquitoes when they attack livestock, (4) kill adult mosquitoes when they feed upon sugar, or (5) kill immature mosquitoes at aquatic habitats. However, none of these options has sufficient supporting evidence to justify full-scale programmatic implementation so concerted investment in their rigorous selection, development and evaluation is required over the coming decade to enable control and, ultimately, elimination of residual malaria transmission. In the meantime, national programmes may assess options for addressing residual transmission under programmatic conditions through exploratory pilot studies with strong monitoring, evaluation and operational research components, similarly to the Onchocerciasis Control Programme.",
"title": ""
},
{
"docid": "8c31d750a503929a0776ae3b1e1d9f41",
"text": "Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many Natural Language Processing (NLP) applications. We present two new corpora of email and blog conversations annotated with topics, and evaluate annotator reliability for the segmentation and labeling tasks in these asynchronous conversations. We propose a complete computational framework for topic segmentation and labeling in asynchronous conversations. Our approach extends state-of-the-art methods by considering a fine-grained structure of an asynchronous conversation, along with other conversational features by applying recent graph-based methods for NLP. For topic segmentation, we propose two novel unsupervised models that exploit the fine-grained conversational structure, and a novel graph-theoretic supervised model that combines lexical, conversational and topic features. For topic labeling, we propose two novel (unsupervised) random walk models that respectively capture conversation specific clues from two different sources: the leading sentences and the fine-grained conversational structure. Empirical evaluation shows that the segmentation and the labeling performed by our best models beat the state-of-the-art, and are highly correlated with human annotations.",
"title": ""
},
{
"docid": "b2589260e4e8d26df598bb873646b7ec",
"text": "In this paper, the performance of a topological-metric visual-path-following framework is investigated in different environments. The framework relies on a monocular camera as the only sensing modality. The path is represented as a series of reference images such that each neighboring pair contains a number of common landmarks. Local 3-D geometries are reconstructed between the neighboring reference images to achieve fast feature prediction. This condition allows recovery from tracking failures. During navigation, the robot is controlled using image-based visual servoing. The focus of this paper is on the results from a number of experiments that were conducted in different environments, lighting conditions, and seasons. The experiments with a robot car show that the framework is robust to moving objects and moderate illumination changes. It is also shown that the system is capable of online path learning.",
"title": ""
},
{
"docid": "14a3e0f52760802ae74a21cd0cb66507",
"text": "Credit scoring has been regarded as a core appraisal tool of different institutions during the last few decades, and has been widely investigated in different areas, such as finance and accounting. Different scoring techniques are being used in areas of classification and prediction, where statistical techniques have conventionally been used. Both sophisticated and traditional techniques, as well as performance evaluation criteria are investigated in the literature. The principal aim of this paper is to carry out a comprehensive review of 214 articles/books/theses that involve credit scoring applications in various areas, in general, but primarily in finance and banking, in particular. This paper also aims to investigate how credit scoring has developed in importance, and to identify the key determinants in the construction of a scoring model, by means of a widespread review of different statistical techniques and performance evaluation criteria. Our review of literature revealed that there is no overall best statistical technique used in building scoring models and the best technique for all circumstances does not yet exist. Also, the applications of the scoring methodologies have been widely extended to include different areas, and this subsequently can help decision makers, particularly in banking, to predict their clients‟ behaviour. Finally, this paper also suggests a number of directions for future research.",
"title": ""
},
{
"docid": "e3bbd0ccc00cd545f11d05ab1421ed01",
"text": "The expectation-confirmation model (ECM) of IT continuance is a model for investigating continued information technology (IT) usage behavior. This paper reports on a study that attempts to expand the set of post-adoption beliefs in the ECM, in order to extend the application of the ECM beyond an instrumental focus. The expanded ECM, incorporating the post-adoption beliefs of perceived usefulness, perceived enjoyment and perceived ease of use, was empirically validated with data collected from an on-line survey of 811 existing users of mobile Internet services. The data analysis showed that the expanded ECM has good explanatory power (R 1⁄4 57:6% of continued IT usage intention and R 1⁄4 67:8% of satisfaction), with all paths supported. Hence, the expanded ECM can provide supplementary information that is relevant for understanding continued IT usage. The significant effects of post-adoption perceived ease of use and perceived enjoyment signify that the nature of the IT can be an important boundary condition in understanding the continued IT usage behavior. At a practical level, the expanded ECM presents IT product/service providers with deeper insights into how to address IT users’ satisfaction and continued patronage. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b8dae71335b9c6caa95bed38d32f102a",
"text": "Mining frequent closed itemsets provides complete and non-redundant results for frequent pattern analysis. Extensive studies have proposed various strategies for efficient frequent closed itemset mining, such as depth-first search vs. breadthfirst search, vertical formats vs. horizontal formats, tree-structure vs. other data structures, top-down vs. bottom-up traversal, pseudo projection vs. physical projection of conditional database, etc. It is the right time to ask \"what are the pros and cons of the strategies?\" and \"what and how can we pick and integrate the best strategies to achieve higher performance in general cases?\"In this study, we answer the above questions by a systematic study of the search strategies and develop a winning algorithm CLOSET+. CLOSET+ integrates the advantages of the previously proposed effective strategies as well as some ones newly developed here. A thorough performance study on synthetic and real data sets has shown the advantages of the strategies and the improvement of CLOSET+ over existing mining algorithms, including CLOSET, CHARM and OP, in terms of runtime, memory usage and scalability.",
"title": ""
},
{
"docid": "0dfd5345c2dc3fe047dcc635760ffedd",
"text": "This paper presents a fast, joint spatial- and Doppler velocity-based, probabilistic approach for ego-motion estimation for single and multiple radar-equipped robots. The normal distribution transform is used for the fast and accurate position matching of consecutive radar detections. This registration technique is successfully applied to laser-based scan matching. To overcome discontinuities of the original normal distribution approach, an appropriate clustering technique provides a globally smooth mixed-Gaussian representation. It is shown how this matching approach can be significantly improved by taking the Doppler information into account. The Doppler information is used in a density-based approach to extend the position matching to a joint likelihood optimization function. Then, the estimated ego-motion maximizes this function. Large-scale real world experiments in an urban environment using a 77 GHz radar show the robust and accurate ego-motion estimation of the proposed algorithm. In the experiments, comparisons are made to state-of-the-art algorithms, the vehicle odometry, and a high-precision inertial measurement unit.",
"title": ""
},
{
"docid": "c8f97cc28c124f08c161898f1c1023ad",
"text": "Nonnegative matrix factorization (NMF) is a widely-used method for low-rank approximation (LRA) of a nonnegative matrix (matrix with only nonnegative entries), where nonnegativity constraints are imposed on factor matrices in the decomposition. A large body of past work on NMF has focused on the case where the data matrix is complete. In practice, however, we often encounter with an incomplete data matrix where some entries are missing (e.g., a user-rating matrix). Weighted low-rank approximation (WLRA) has been studied to handle incomplete data matrix. However, there is only few work on weighted nonnegative matrix factorization (WNMF) that is WLRA with nonnegativity constraints. Existing WNMF methods are limited to a direct extension of NMF multiplicative updates, which suffer from slow convergence while the implementation is easy. In this paper we develop relatively fast and scalable algorithms for WNMF, borrowed from well-studied optimization techniques: (1) alternating nonnegative least squares; (2) generalized expectation maximization. Numerical experiments on MovieLens and Netflix prize datasets confirm the useful behavior of our methods, in a task of collaborative prediction.",
"title": ""
},
{
"docid": "f83017ad2454c465d19f70f8ba995e95",
"text": "The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.",
"title": ""
},
{
"docid": "87037d2da4c9fcf346023562a46773eb",
"text": "From the perspective of kinematics, dual-arm manipulation in robots differs from single-arm manipulation in that it requires high dexterity in a specific region of the manipulator’s workspace. This feature has motivated research on the specialized design of manipulators for dualarm robots. These recently introduced robots often utilize a shoulder structure with a tilted angle of some magnitude. The tilted shoulder yields better kinematic performance for dual-arm manipulation, such as a wider common workspace for each arm. However, this method tends to reduce total workspace volume, which results in lower kinematic performance for single-arm tasks in the outer region of the workspace. To overcome this trade-off, the authors of this study propose a design for a dual-arm robot with a biologically inspired four degree-of-freedom shoulder mechanism. This study analyzes the kinematic performance of the proposed design and compares it with that of a conventional dual-arm robot from the perspective of workspace and single-/dual-arm manipulability. The comparative analysis Electronic supplementary material The online version of this article (doi:10.1007/s11370-017-0215-z) contains supplementary material, which is available to authorized users. B Ji-Hun Bae [email protected] Dong-Hyuk Lee [email protected] Hyeonjun Park [email protected] Jae-Han Park [email protected] Moon-Hong Baeg [email protected] 1 Robot Control and Cognition Lab., Robot R&D Group, Korea Institute of Industrial Technology (KITECH), Ansan, Korea revealed that the proposed structure can significantly enhance singleand dual-arm kinematic performance in comparison with conventional dual-arm structures. This superior kinematic performance was verified through experiments, which showed that the proposed method required shorter settling time and trajectory-following performance than the conventional dual-arm robot.",
"title": ""
},
{
"docid": "3b2a3fc20a03d829e4c019fbdbc0f2ae",
"text": "First cars equipped with 24 GHz short range radar (SRR) systems in combination with 77 GHz long range radar (LRR) system enter the market in autumn 2005 enabling new safety and comfort functions. In Europe the 24 GHz ultra wideband (UWB) frequency band is temporally allowed only till end of June 2013 with a limitation of the car pare penetration of 7%. From middle of 2013 new cars have to be equipped with SRR sensors which operate in the frequency band of 79 GHz (77 GHz to 81 GHz). The development of the 79 GHz SRR technology within the German government (BMBF) funded project KOKON is described",
"title": ""
},
{
"docid": "c2da0c999b00aa25753dee4e5d4521b7",
"text": "Quality degradation and computational complexity are the major challenges for image interpolation algorithms. Advanced interpolation techniques achieve to preserve fine image details but typically suffer from lower computational efficiency, while simpler interpolation techniques lead to lower quality images. In this paper, we propose an edge preserving technique based on inverse gradient weights as well as pixel locations for interpolation. Experimental results confirm that the proposed algorithm exhibits better image quality compared to conventional algorithms. At the same time, our approach is shown to be faster than several advanced edge preserving interpolation algorithms.",
"title": ""
},
{
"docid": "d658b95cc9dc81d0dbb3918795ccab50",
"text": "A brain–computer interface (BCI) is a communication channel which does not depend on the brain’s normal output pathways of peripheral nerves and muscles [1–3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter “Brain Signals for Brain–Computer Interfaces”. These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities. In this chapter, practical designs of several BCIs developed in Tsinghua University will be introduced. First of all, we will propose the paradigm of BCIs based on the modulation of EEG rhythms and challenges confronting practical system designs. In Sect. 2, modulation and demodulation methods of EEG rhythms will be further explained. Furthermore, practical designs of a VEP-based BCI and a motor imagery based BCI will be described in Sect. 3. Finally, Sect. 4 will present some real-life application demos using these practical BCI systems.",
"title": ""
},
{
"docid": "5d17ff397a09da24945bb549a8bfd3ec",
"text": "For applications of 5G (5th generation mobile networks) communication systems, dual-polarized patch array antenna operating at 28.5 GHz is designed on the package substrate. To verify the radiation performance of designed antenna itself, a test package including two patch antennas is also design and its scattering parameters were measured. Using a large height of dielectric materials, 1.5 ∼ 2.0 GHz of antenna bandwidth is achieved which is wide enough. Besides, the dielectric constants are reduced to reflect variances of material properties in the higher frequency region. Measured results of the test package show a good performance at the operating frequency, indicating that the fabricated antenna package will perform well, either. In the future work, manufacturing variances will be investigated further.",
"title": ""
}
] |
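One passage in the list above describes weighted nonnegative matrix factorization (WNMF) for matrices with missing entries, solved via alternating nonnegative least squares or a generalized EM scheme. As a purely illustrative sketch — not the authors' implementation, and with hypothetical function and variable names — an EM-style WNMF with multiplicative updates could look like this:

```python
import numpy as np

def wnmf_em(X, M, rank, n_iter=200, eps=1e-9, seed=0):
    """EM-style weighted NMF sketch: M is a 0/1 mask of observed entries.

    E-step: fill unobserved entries of X with the current reconstruction.
    M-step: apply standard multiplicative NMF updates to the filled matrix.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.random((n, rank)) + eps
    V = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        R = U @ V
        X_filled = M * X + (1.0 - M) * R                  # E-step
        U *= (X_filled @ V.T) / (U @ (V @ V.T) + eps)     # M-step: update U
        V *= (U.T @ X_filled) / ((U.T @ U) @ V + eps)     # M-step: update V
    return U, V

# Toy usage: a small ratings-like matrix; zeros in the mask mark missing cells.
X = np.array([[5., 3., 0.], [4., 0., 1.], [1., 1., 5.]])
M = np.array([[1., 1., 0.], [1., 0., 1.], [1., 1., 1.]])
U, V = wnmf_em(X, M, rank=2)
print(np.round(U @ V, 2))
```

Filling the unobserved cells with the current reconstruction lets the M-step reuse the ordinary unweighted NMF updates, which is what makes this variant simple to write down.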
scidocsrr
|
ad3929e06206e98baf0d78b76e9d66a7
|
A Novel Cogging Torque Reduction Method for Interior Type Permanent Magnet Motor
|
[
{
"docid": "2c92d42311f9708b7cb40f34551315e0",
"text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.",
"title": ""
}
] |
[
{
"docid": "ed20f85a638c4e0079bac55db1d52d01",
"text": "Cloaking is a common 'bait-and-switch' technique used to hide the true nature of a Web site by delivering blatantly different semantic content to different user segments. It is often used in search engine optimization (SEO) to obtain user traffic illegitimately for scams. In this paper, we measure and characterize the prevalence of cloaking on different search engines, how this behavior changes for targeted versus untargeted advertising and ultimately the response to site cloaking by search engine providers. Using a custom crawler, called Dagger, we track both popular search terms (e.g., as identified by Google, Alexa and Twitter) and targeted keywords (focused on pharmaceutical products) for over five months, identifying when distinct results were provided to crawlers and browsers. We further track the lifetime of cloaked search results as well as the sites they point to, demonstrating that cloakers can expect to maintain their pages in search results for several days on popular search engines and maintain the pages themselves for longer still.",
"title": ""
},
{
"docid": "bab36592a2f3df97a8580169ad92adef",
"text": "Two hundred and nine pupils were randomly allocated to either a cognitive behaviourally based stress management intervention (SMI) group, or a non-intervention control group. Mood and motivation measures were administered pre and post intervention. Standardized examinations were taken 8-10 weeks later. As hypothesized, results indicated that an increase in the functionality of pupils' cognitions served as the mechanism by which mental health improved in the SMI group. In contrast, the control group demonstrated no such improvements. Also, as predicted, an increase in motivation accounted for the SMI group's significantly better performance on the standardized, academic assessments that comprise the United Kingdom's General Certificate of Secondary Education. Indeed, the magnitude of this enhanced performance was, on average, one-letter grade. Discussion focuses on the theoretical and practical implications of these findings.",
"title": ""
},
{
"docid": "89bec90bd6715a3907fba9f0f7655158",
"text": "Long text brings a big challenge to neural network based text matching approaches due to their complicated structures. To tackle the challenge, we propose a knowledge enhanced hybrid neural network (KEHNN) that leverages prior knowledge to identify useful information and filter out noise in long text and performs matching from multiple perspectives. The model fuses prior knowledge into word representations by knowledge gates and establishes three matching channels with words, sequential structures of text given by Gated Recurrent Units (GRUs), and knowledge enhanced representations. The three channels are processed by a convolutional neural network to generate high level features for matching, and the features are synthesized as a matching score by a multilayer perceptron. In this paper, we focus on exploring the use of taxonomy knowledge for text matching. Evaluation results from extensive experiments on public data sets of question answering and conversation show that KEHNN can significantly outperform state-of-the-art matching models and particularly improve matching accuracy on pairs with long text.",
"title": ""
},
{
"docid": "3eb022b3ec1517bc54670a68c8a14106",
"text": "Waste as a management issue has been evident for over four millennia. Disposal of waste to the biosphere has given way to thinking about, and trying to implement, an integrated waste management approach. In 1996 the United Nations Environmental Programme (UNEP) defined 'integrated waste management' as 'a framework of reference for designing and implementing new waste management systems and for analysing and optimising existing systems'. In this paper the concept of integrated waste management as defined by UNEP is considered, along with the parameters that constitute integrated waste management. The examples used are put into four categories: (1) integration within a single medium (solid, aqueous or atmospheric wastes) by considering alternative waste management options, (2) multi-media integration (solid, aqueous, atmospheric and energy wastes) by considering waste management options that can be applied to more than one medium, (3) tools (regulatory, economic, voluntary and informational) and (4) agents (governmental bodies (local and national), businesses and the community). This evaluation allows guidelines for enhancing success: (1) as experience increases, it is possible to deal with a greater complexity; and (2) integrated waste management requires a holistic approach, which encompasses a life cycle understanding of products and services. This in turn requires different specialisms to be involved in the instigation and analysis of an integrated waste management system. Taken together these advance the path to sustainability.",
"title": ""
},
{
"docid": "ef365e432e771c812300b654ceaff419",
"text": "OBJECTIVE\nPretreatment of myoinositol is a very new method that was evaluated in multiple small studies to manage poor ovarian response in assisted reproduction. This study was to determine the efficacy of myoinositol supplement in infertile women undergoing ovulation induction for intracytoplasmic sperm injection (ICSI) or in vitro fertilization embryo transfer (IVF-ET).\n\n\nMETHODS\nA meta-analysis and systematic review of published articles evaluating the efficacy of myo-inositol in patients undergoing ovulation induction for ICSI or IVF-ET was performed.\n\n\nRESULTS\nSeven trials with 935 women were included. Myoinositol supplement was associated with significantly improved clinical pregnancy rate [95% confidence interval (CI), 1.04-1.96; P = .03] and abortion rate (95% CI, 0.08-0.50; P = .0006). Meanwhile, Grade 1 embryos proportion (95% CI, 1.10-2.74; P = .02), germinal vescicle and degenerated oocytes retrieved (95% CI, 0.11-0.86; P = .02), and total amount of ovulation drugs (95% CI, -591.69 to -210.39; P = .001) were also improved in favor of myo-inositol. There were no significant difference in total oocytes retrieved, MII stage oocytes retrieved, stimulation days, and E2 peak level.\n\n\nCONCLUSIONS\nMyoinositol supplement increase clinical pregnancy rate in infertile women undergoing ovulation induction for ICSI or IVF-ET. It may improve the quality of embryos, and reduce the unsuitable oocytes and required amount of stimulation drugs.",
"title": ""
},
{
"docid": "394e20d6fd7f69ce2f5308951244328f",
"text": "Digital multimedia such as images and videos are prevalent on today’s internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose a new framework to perform multimedia forensics by using compact side information to reconstruct the processing history of a multimedia document. We refer to this framework as FASHION, standing for Forensic hASH for informatION assurance. As a first step in the modular design for FASHION, we propose new algorithms based on Radon transform and scale space theory to effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. The FASHION framework is designed to answer a much broader range of questions regarding the processing history of multimedia data than simple binary decision from robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.",
"title": ""
},
{
"docid": "7ecf315d70e6d438ef90ec76b192b65f",
"text": "Stress is a common condition, a response to a physical threat or psychological distress, that generates a host of chemical and hormonal reactions in the body. In essence, the body prepares to fight or fiee, pumping more blood to the heart and muscles and shutting down all nonessential functions. As a temporary state, this reaction serves the body well to defend itself When the stress reaction is prolonged, however, the normal physical functions that have in response either been exaggerated or shut down become dysfunctional. Many have noted the benefits of exercise in diminishing the stress response, and a host of studies points to these benefits. Yoga, too, has been recommended and studied in relationship to stress, although the studies are less scientifically replicable. Nonetheless, several researchers claim highly beneficial results from Yoga practice in alleviating stress and its effects. The practices recommended range from intense to moderate to relaxed asana sequences, along yNith.pranayama and meditation. In all these approaches to dealing with stress, one common element stands out: The process is as important as the activity undertaken. Because it fosters self-awareness. Yoga is a promising approach for dealing with the stress response. Yoga and the Stress Response Stress has become a common catchword in our society to indicate a host of difficulties, both as cause and effect. The American Academy of Family Physicians has noted that stress-related symptoms prompt two-thirds of the office visits to family physicians.' Exercise and alternative therapies are now commonly prescribed for stress-related complaints and illness. Even a recent issue of Consumer Reports suggests Yoga for stress relief.̂ Many books and articles claim, as does Dr. Susan Lark, that practicing Yoga will \"provide effective relief of anxiety and stress.\"^ But is this an accurate promise? What Is the Stress Response? A review of the current thinking on stress reveals that the process is both biochemical and psychological. A very good summary of research on the stress response is contained in Robert Sapolsky's Why Zebras Don't Get",
"title": ""
},
{
"docid": "4e5dd7032d9b3fc4563c893a26867e93",
"text": "User interaction in visual analytic systems is critical to enabling visual data exploration. Through interacting with visualizations, users engage in sensemaking, a process of developing and understanding relationships within datasets through foraging and synthesis. For example, two-dimensional layouts of high-dimensional data can be generated by dimension reduction models, and provide users with an overview of the relationships between information. However, exploring such spatializations can require expertise with the internal mechanisms and parameters of these models. The core contribution of this work is semantic interaction, capable of steering such models without requiring expertise in dimension reduction models, but instead leveraging the domain expertise of the user. Semantic interaction infers the analytical reasoning of the user with model updates, steering the dimension reduction model for visual data exploration. As such, it is an approach to user interaction that leverages interactions designed for synthesis, and couples them with the underlying mathematical model to provide computational support for foraging. As a result, semantic interaction performs incremental model learning to enable synergy between the user’s insights and the mathematical model. The contributions of this work are organized by providing a description of the principles of semantic interaction, providing design guidelines through the development of a visual analytic prototype, ForceSPIRE, and the evaluation of the impact of semantic interaction on the analytic process. The positive results of semantic interaction open a fundamentally new design space for designing user interactions in visual analytic systems. This research was funded in part by the National Science Foundation, CCF-0937071 and CCF-0937133, the Institute for Critical Technology and Applied Science at Virginia Tech, and the National Geospatial-Intelligence Agency contract #HMI1582-05-1-2001.",
"title": ""
},
{
"docid": "75a9715ce9eaffaa43df5470ad7cacca",
"text": "Resting frontal electroencephalographic (EEG) asymmetry has been hypothesized as a marker of risk for major depressive disorder (MDD), but the extant literature is based predominately on female samples. Resting frontal asymmetry was assessed on 4 occasions within a 2-week period in 306 individuals aged 18-34 (31% male) with (n = 143) and without (n = 163) lifetime MDD as defined by the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (American Psychiatric Association, 1994). Lifetime MDD was linked to relatively less left frontal activity for both sexes using a current source density (CSD) reference, findings that were not accounted for solely by current MDD status or current depression severity, suggesting that CSD-referenced EEG asymmetry is a possible endophenotype for depression. In contrast, results for average and linked mastoid references were less consistent but demonstrated a link between less left frontal activity and current depression severity in women.",
"title": ""
},
{
"docid": "532980d1216f9f10332cc13b6a093fb4",
"text": "Several studies on sentence processing suggest that the mental lexicon keeps track of the mutual expectations between words. Current DSMs, however, represent context words as separate features, which causes the loss of important information for word expectations, such as word order and interrelations. In this paper, we present a DSM which addresses the issue by defining verb contexts as joint dependencies. We test our representation in a verb similarity task on two datasets, showing that joint contexts are more efficient than single dependencies, even with a relatively small amount of training data.",
"title": ""
},
{
"docid": "8eec38aad55b37482d6f93a5f909b945",
"text": "Linking or matching databases is becoming increasingly important in many data mining projects, as linked data can contain information that is not available otherwise, or that would be too expensive to collect manually. A main challenge when linking large databases is the classification of the compared record pairs into matches and non-matches. In traditional record linkage, classification thresholds have to be set either manually or using an EM-based approach. More recently developed classification methods are mainly based on supervised machine learning techniques and thus require training data, which is often not available in real world situations or has to be prepared manually. In this paper, a novel two-step approach to record pair classification is presented. In a first step, example training data of high quality is generated automatically, and then used in a second step to train a supervised classifier. Initial experimental results on both real and synthetic data show that this approach can outperform traditional unsupervised clustering, and even achieve linkage quality almost as good as fully supervised techniques.",
"title": ""
},
{
"docid": "0bcb2fdf59b88fca5760bfe456d74116",
"text": "A good distance metric is crucial for unsupervised learning from high-dimensional data. To learn a metric without any constraint or class label information, most unsupervised metric learning algorithms appeal to projecting observed data onto a low-dimensional manifold, where geometric relationships such as local or global pairwise distances are preserved. However, the projection may not necessarily improve the separability of the data, which is the desirable outcome of clustering. In this paper, we propose a novel unsupervised adaptive metric learning algorithm, called AML, which performs clustering and distance metric learning simultaneously. AML projects the data onto a low-dimensional manifold, where the separability of the data is maximized. We show that the joint clustering and distance metric learning can be formulated as a trace maximization problem, which can be solved via an iterative procedure in the EM framework. Experimental results on a collection of benchmark data sets demonstrated the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "ce04dd56c71acc8752b1965fd89d5c35",
"text": "Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far strongest, in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.",
"title": ""
},
{
"docid": "6264a8e43070f686375150b4beadaee7",
"text": "A control law for an integrated power/attitude control system (IPACS) for a satellite is presented. Four or more energy/momentum wheels in an arbitrary noncoplanar con guration and a set of three thrusters are used to implement the torque inputs. The energy/momentum wheels are used as attitude-control actuators, as well as an energy storage mechanism, providing power to the spacecraft. In that respect, they can replace the currently used heavy chemical batteries. The thrusters are used to implement the torques for large and fast (slew) maneuvers during the attitude-initialization and target-acquisition phases and to implement the momentum management strategies. The energy/momentum wheels are used to provide the reference-tracking torques and the torques for spinning up or down the wheels for storing or releasing kinetic energy. The controller published in a previous work by the authors is adopted here for the attitude-tracking function of the wheels. Power tracking for charging and discharging the wheels is added to complete the IPACS framework. The torques applied by the energy/momentum wheels are decomposed into two spaces that are orthogonal to each other, with the attitude-control torques and power-tracking torques in each space. This control law can be easily incorporated in an IPACS system onboard a satellite. The possibility of the occurrence of singularities, in which no arbitrary energy pro le can be tracked, is studied for a generic wheel cluster con guration. A standard momentum management scheme is considered to null the total angular momentum of the wheels so as to minimize the gyroscopic effects and prevent the singularity from occurring. A numerical example for a satellite in a low Earth near-polar orbit is provided to test the proposed IPACS algorithm. The satellite’s boresight axis is required to track a ground station, and the satellite is required to rotate about its boresight axis so that the solar panel axis is perpendicular to the satellite–sun vector.",
"title": ""
},
{
"docid": "a1b6fc8362fab0c062ad31a205e74898",
"text": "Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. It has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer’s speakers. However, such acoustic communication relies on the availability of speakers on a computer.",
"title": ""
},
{
"docid": "8fdb49c00aa8771ae69802ee1b16d664",
"text": "A deep Boltzmann machine (DBM) is a recently introduced Markov random field model that has multiple layers of hidden units. It has been shown empirically that it is difficult to train a DBM with approximate maximum-likelihood learning using the stochastic gradient unlike its simpler special case, restricted Boltzmann machines (RBM). In this paper, we propose a novel pretraining algorithm that consists of two stages; obtaining approximate posterior distributions over hidden units from a simpler model and maximizing the variational lower-bound given the fixed hidden posterior distributions. We show empirically that the proposed method overcomes the difficulty in training DBMs from randomly initialized parameters and results in a better, or comparable, generative model when compared to the conventional pretraining algorithm.",
"title": ""
},
{
"docid": "ce35e9efdddd5da0109794645270be8f",
"text": "The objective of generalized sampling expansion (GSE) is the reconstruction of an unknown, continuously defined function f (t) from samples of the responses from M linear time-invariant (LTI) systems that are each sampled using the 1/Mth Nyquist rate. In this paper, we investigate the GSE for lowpass and bandpass signals with multiple sampling rates in the fractional Fourier transform (FRFT) domain. First, we propose an improvement of Papoulis' GSE, which has multiple sampling rates in the FRFT domain. Based on the proposed GSE, we derive the periodic nonuniform sampling scheme and the derivative interpolation method by designing different fractional filters and selecting specific sampling rates. In addition, the Papoulis GSE and the previous GSE associated with FRFT are shown to be special instances of our results. Second, we address the problem of the GSE of fractional bandpass signals. A new GSE for fractional bandpass signals with equal sampling rates is derived. We show that the restriction of an even number of channels in the GSE for fractional bandpass signals is unnecessary, and perfect signal reconstruction is possible for any arbitrary number of channels. Further, we develop the GSE for a fractional bandpass signal with multiple sampling rates. Lastly, we discuss the application of the proposed method in the context of single-image super-resolution reconstruction based on GSE. Illustrations and simulations are presented to verify the validity and effectiveness of the proposed results.",
"title": ""
},
{
"docid": "0150caaaa121afdbf04dbf496d3770c3",
"text": "The use of interactive technologies to aid in the implementation of smart cities has a significant potential to support disabled users in performing their activities as citizens. In this study, we present an investigation of the accessibility of a sample of 10 mobile Android™ applications of Brazilian municipalities, two from each of the five big geographical regions of the country, focusing especially on users with visual disabilities. The results showed that many of the applications were not in accordance with accessibility guidelines, with an average of 57 instances of violations and an average of 11.6 different criteria violated per application. The main problems included issues like not addressing labelling of non-textual content, headings, identifying user location, colour contrast, enabling users to interact using screen reader gestures, focus visibility and lack of adaptation of text contained in image. Although the growth in mobile applications for has boosted the possibilities aligned with the principles of smart cities, there is a strong need for including accessibility in the design of such applications in order for disabled people to benefit from the potential they can have for their lives.",
"title": ""
},
{
"docid": "a9440a3eb37360176f5ee792da1dbdf3",
"text": "Background: Test quality is a prerequisite for achieving production system quality. While the concept of quality is multidimensional, most of the effort in testing context hasbeen channelled towards measuring test effectiveness. Objective: While effectiveness of tests is certainly important, we aim to identify a core list of testing principles that also address other quality facets of testing, and to discuss how they can be quantified as indicators of test quality. Method: We have conducted a two-day workshop with our industry partners to come up with a list of relevant principles and best practices expected to result in high quality tests. We then utilised our academic and industrial training materials together with recommendations in practitioner oriented testing books to refine the list. We surveyed existing literature for potential metrics to quantify identified principles. Results: We have identified a list of 15 testing principles to capture the essence of testing goals and best practices from quality perspective. Eight principles do not map toexisting test smells and we propose metrics for six of those. Further, we have identified additional potential metrics for the seven principles that partially map to test smells. Conclusion: We provide a core list of testing principles along with a discussion of possible ways to quantify them for assessing goodness of tests. We believe that our work wouldbe useful for practitioners in assessing the quality of their tests from multiple perspectives including but not limited to maintainability, comprehension and simplicity.",
"title": ""
},
{
"docid": "c09391a25defcb797a7c8da3f429fafa",
"text": "BACKGROUND\nTo examine the postulated relationship between Ambulatory Care Sensitive Conditions (ACSC) and Primary Health Care (PHC) in the US context for the European context, in order to develop an ACSC list as markers of PHC effectiveness and to specify which PHC activities are primarily responsible for reducing hospitalization rates.\n\n\nMETHODS\nTo apply the criteria proposed by Solberg and Weissman to obtain a list of codes of ACSC and to consider the PHC intervention according to a panel of experts. Five selection criteria: i) existence of prior studies; ii) hospitalization rate at least 1/10,000 or 'risky health problem'; iii) clarity in definition and coding; iv) potentially avoidable hospitalization through PHC; v) hospitalization necessary when health problem occurs. Fulfilment of all criteria was required for developing the final ACSC list. A sample of 248,050 discharges corresponding to 2,248,976 inhabitants of Catalonia in 1996 provided hospitalization rate data. A Delphi survey was performed with a group of 44 experts reviewing 113 ICD diagnostic codes (International Classification of Diseases, 9th Revision, Clinical Modification), previously considered to be ACSC.\n\n\nRESULTS\nThe five criteria selected 61 ICD as a core list of ACSC codes and 90 ICD for an expanded list.\n\n\nCONCLUSIONS\nA core list of ACSC as markers of PHC effectiveness identifies health conditions amenable to specific aspects of PHC and minimizes the limitations attributable to variations in hospital admission policies. An expanded list should be useful to evaluate global PHC performance and to analyse market responsibility for ACSC by PHC and Specialist Care.",
"title": ""
}
] |
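One passage in the list above outlines a two-step record-pair classification approach for data linkage: automatically select high-quality training examples, then train a supervised classifier on them. A minimal sketch of that idea — using a nearest-centroid rule as a stand-in for the unspecified supervised classifier, with hypothetical names and toy thresholds — might be:

```python
import numpy as np

def two_step_classify(sim_vectors, hi=0.9, lo=0.3):
    """Two-step record-pair classification sketch (illustrative only).

    Step 1: automatically pick seed training examples -- pairs whose mean
    field similarity is very high (likely matches) or very low (likely
    non-matches). Step 2: fit a simple supervised rule (nearest centroid)
    on the seeds and label every pair with it.
    """
    sims = np.asarray(sim_vectors, dtype=float)
    score = sims.mean(axis=1)
    match_seeds = sims[score >= hi]
    nonmatch_seeds = sims[score <= lo]
    if len(match_seeds) == 0 or len(nonmatch_seeds) == 0:
        raise ValueError("thresholds produced no seed examples")
    c_match = match_seeds.mean(axis=0)      # centroid of likely matches
    c_non = nonmatch_seeds.mean(axis=0)     # centroid of likely non-matches
    d_match = np.linalg.norm(sims - c_match, axis=1)
    d_non = np.linalg.norm(sims - c_non, axis=1)
    return d_match < d_non                  # True = classified as a match

# Toy similarity vectors: one row per compared record pair, one column per field.
pairs = [[0.95, 0.99, 0.90],   # clearly the same record
         [0.10, 0.05, 0.20],   # clearly different records
         [0.80, 0.60, 0.75]]   # ambiguous pair decided by the learned rule
print(two_step_classify(pairs))
```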
scidocsrr
|
ec8ca1843aede3eba3652535c2ba7e56
|
Arithmetic Coding for Data Compression
|
[
{
"docid": "bbf581230ec60c2402651d51e3a37211",
"text": "The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.",
"title": ""
}
] |
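The positive passage for this query is the classic abstract on arithmetic coding. As an illustration only, the toy encoder/decoder below conveys the interval-narrowing idea with a fixed (non-adaptive) model and exact rationals; a practical coder would instead use incremental, fixed-precision integer arithmetic and an adaptive model:

```python
from fractions import Fraction

def _cumulative(probs):
    """Map each symbol to its half-open probability subinterval."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = (c, c + Fraction(p))
        c += Fraction(p)
    return cum

def arithmetic_encode(symbols, probs):
    """Shrink [low, high) once per symbol; return a fraction inside it."""
    cum = _cumulative(probs)
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        span = high - low
        lo_s, hi_s = cum[s]
        low, high = low + span * lo_s, low + span * hi_s
    return (low + high) / 2

def arithmetic_decode(code, n, probs):
    """Invert the toy encoder for a known message length n."""
    cum = _cumulative(probs)
    out, low, high = [], Fraction(0), Fraction(1)
    for _ in range(n):
        span = high - low
        for s, (lo_s, hi_s) in cum.items():
            if low + span * lo_s <= code < low + span * hi_s:
                out.append(s)
                low, high = low + span * lo_s, low + span * hi_s
                break
    return "".join(out)

model = {"a": "0.5", "b": "0.25", "c": "0.25"}   # fixed symbol probabilities
msg = "abca"
code = arithmetic_encode(msg, model)
assert arithmetic_decode(code, len(msg), model) == msg
print(code)
```

Exact rationals keep the toy version correct for short messages; the incremental renormalization described in the cited work is what makes the same idea practical for long inputs.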
[
{
"docid": "eb761eb499b2dc82f7f2a8a8a5ff64a7",
"text": "We consider the situation in which digital data is to be reliably transmitted over a discrete, memoryless channel (dmc) that is subjected to a wire-tap at the receiver. We assume that the wire-tapper views the channel output via a second dmc). Encoding by the transmitter and decoding by the receiver are permitted. However, the code books used in these operations are assumed to be known by the wire-tapper. The designer attempts to build the encoder-decoder in such a way as to maximize the transmission rate R, and the equivocation d of the data as seen by the wire-tapper. In this paper, we find the trade-off curve between R and d, assuming essentially perfect (“error-free”) transmission. In particular, if d is equal to Hs, the entropy of the data source, then we consider that the transmission is accomplished in perfect secrecy. Our results imply that there exists a Cs > 0, such that reliable transmission at rates up to Cs is possible in approximately perfect secrecy.",
"title": ""
},
{
"docid": "69d3c943755734903b9266ca2bd2fad1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
},
{
"docid": "c3f25271d25590bf76b36fee4043d227",
"text": "Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, this does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme to be effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.",
"title": ""
},
{
"docid": "a09cb533a0a90a056857d597213efdf2",
"text": "一 引言 图像的边缘是图像的重要的特征,它给出了图像场景中物体的轮廓特征信息。当要对图 像中的某一个物体进行识别时,边缘信息是重要的可以利用的信息,例如在很多系统中采用 的模板匹配的识别算法。基于此,我们设计了一套基于 PCI Bus和 Vision Bus的可重构的机 器人视觉系统[3]。此系统能够实时的对图像进行采集,并可以通过系统实时的对图像进行 边缘的提取。 对于图像的边缘提取,采用二阶的边缘检测算子处理后要进行过零点检测,计算量很大 而且用硬件实现资源占用大且速度慢,所以在我们的视觉系统中,卷积器中选择的是一阶的 边缘检测算子。采用一阶的边缘检测算子进行卷积运算之后,仅仅需要对卷积得到的图像进 行阈值处理就可以得到图像的边缘,而阈值处理的操作用硬件实现占用资源少且速度快。由 于本视觉系统要求与应用环境下的精密装配机器人配合使用,系统的实时性要求非常高。因 此,如何对实时采集图像进行快速实时的边缘提取阈值的自动选取,是我们必须要考虑的问 题。 遗传算法是一种仿生物系统的基因进化的迭代搜索算法,其基本思想是由美国Michigan 大学的 J.Holland 教授提出的。由于遗传算法的整体寻优策略以及优化计算时不依赖梯度信 息,所以它具有很强的全局搜索能力,即对于解空间中的全局最优解有着很强的逼近能力。 它适用于问题结构不是十分清楚,总体很大,环境复杂的场合,而对于实时采集的图像进行 边缘检测阈值的选取就是此类问题。本文在对传统的遗传算法进行改进的基础上,提出了一 种对于实时采集图像进行边缘检测的阈值的自动选取方法。",
"title": ""
},
{
"docid": "a8d3a75cdc3bb43217a0120edf5025ff",
"text": "An important approach to text mining involves the use of natural-language information extraction. Information extraction (IE) distills structured data or knowledge from unstructured text by identifying references to named entities as well as stated relationships between such entities. IE systems can be used to directly extricate abstract knowledge from a text corpus, or to extract concrete data from a set of documents which can then be further analyzed with traditional data-mining techniques to discover more general patterns. We discuss methods and implemented systems for both of these approaches and summarize results on mining real text corpora of biomedical abstracts, job announcements, and product descriptions. We also discuss challenges that arise when employing current information extraction technology to discover knowledge in text.",
"title": ""
},
{
"docid": "71dd012b54ae081933bddaa60612240e",
"text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.",
"title": ""
},
{
"docid": "79a20b9a059a2b4cc73120812c010495",
"text": "The present article summarizes the state of the art algorithms to compute the discrete Moreau envelope, and presents a new linear-time algorithm, named NEP for NonExpansive Proximal mapping. Numerical comparisons between the NEP and two existing algorithms: The Linear-time Legendre Transform (LLT) and the Parabolic Envelope (PE) algorithms are performed. Worst-case time complexity, convergence results, and examples are included. The fast Moreau envelope algorithms first factor the Moreau envelope as several one-dimensional transforms and then reduce the brute force quadratic worst-case time complexity to linear time by using either the equivalence with Fast Legendre Transform algorithms, the computation of a lower envelope of parabolas, or, in the convex case, the non expansiveness of the proximal mapping.",
"title": ""
},
{
"docid": "efe70da1a3118e26acf10aa480ad778d",
"text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.",
"title": ""
},
{
"docid": "c487af41ead3ee0bc8fe6c95b356a80b",
"text": "With such a large volume of material accessible from the World Wide Web, there is an urgent need to increase our knowledge of factors in#uencing reading from screen. We investigate the e!ects of two reading speeds (normal and fast) and di!erent line lengths on comprehension, reading rate and scrolling patterns. Scrolling patterns are de\"ned as the way in which readers proceed through the text, pausing and scrolling. Comprehension and reading rate are also examined in relation to scrolling patterns to attempt to identify some characteristics of e!ective readers. We found a reduction in overall comprehension when reading fast, but the type of information recalled was not dependent on speed. A medium line length (55 characters per line) appears to support e!ective reading at normal and fast speeds. This produced the highest level of comprehension and was also read faster than short lines. Scrolling patterns associated with better comprehension (more time in pauses and more individual scrolling movements) contrast with scrolling patterns used by faster readers (less time in pauses between scrolling). Consequently, e!ective readers can only be de\"ned in relation to the aims of the reading task, which may favour either speed or accuracy. ( 2001 Academic Press",
"title": ""
},
{
"docid": "34623fb38c81af8efaf8e7073e4c43bc",
"text": "The k-means problem consists of finding k centers in R that minimize the sum of the squared distances of all points in an input set P from R to their closest respective center. Awasthi et. al. recently showed that there exists a constant ε′ > 0 such that it is NP-hard to approximate the k-means objective within a factor of 1 + ε′. We establish that the constant ε′ is at least 0.0013. For a given set of points P ⊂ R, the k-means problem consists of finding a partition of P into k clusters (C1, . . . , Ck) with corresponding centers (c1, . . . , ck) that minimize the sum of the squared distances of all points in P to their corresponding center, i.e. the quantity arg min (C1,...,Ck),(c1,...,ck) k ∑",
"title": ""
},
{
"docid": "455e3f0c6f755d78ecafcdff14c46014",
"text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.",
"title": ""
},
{
"docid": "89322e0d2b3566aeb85eeee9f505d5b2",
"text": "Parkinson's disease is a neurological disorder with evolving layers of complexity. It has long been characterised by the classical motor features of parkinsonism associated with Lewy bodies and loss of dopaminergic neurons in the substantia nigra. However, the symptomatology of Parkinson's disease is now recognised as heterogeneous, with clinically significant non-motor features. Similarly, its pathology involves extensive regions of the nervous system, various neurotransmitters, and protein aggregates other than just Lewy bodies. The cause of Parkinson's disease remains unknown, but risk of developing Parkinson's disease is no longer viewed as primarily due to environmental factors. Instead, Parkinson's disease seems to result from a complicated interplay of genetic and environmental factors affecting numerous fundamental cellular processes. The complexity of Parkinson's disease is accompanied by clinical challenges, including an inability to make a definitive diagnosis at the earliest stages of the disease and difficulties in the management of symptoms at later stages. Furthermore, there are no treatments that slow the neurodegenerative process. In this Seminar, we review these complexities and challenges of Parkinson's disease.",
"title": ""
},
{
"docid": "6033f644fb18ce848922a51d3b0000ab",
"text": "This paper tests two of the simplest and most popular trading rules moving average and trading range break, by utilitizing a very long data series, the Dow Jones index from 1897 to 1986. Standard statistical analysis is extended through the use .of bootstrap techniques. Overall our results provide strong support for the technical strategies that are explored. The returns obtained from buy (sell) signals are not consistent with the three popular null models: the random walk, the AR(I) and the GARCH-M. Consistently, buy signals generate higher returns than sell signals. Moreover, returns following sell signals are negative which is not easily explained by any of the currently existing equilibrium models. Furthermore the returns following buy signals are less volatile than returns following sell signals. The term, \"technical analysis,\" is a general heading for a myriad of trading techniques. Technical analysts attempt to forecast prices by the study of past prices and a few other related summary statistics about security trading. They believe that shifts in supply and demand can be detected in charts of market action. Technical analysis is considered by many to be the original form of investment analysis, dating back to the 1800's. It came into widespread use before the period of extensive and fully disclosed financial information, which in turn enabled the practice of fnndamental analysis to develop. In the U.S., the use of trading rules to detect patterns in stock prices is probably as old as the stock market itself. The oldest technique is attributed to Charles Dow and is traced to the late 1800's. Many of the techniques used today have been utilized for over 60 years. These techniques for discovering hidden relations in stock returns can range from extremely simple to quite elaborate. The attitude of academics towards technical analysis, until recently, is well described by Malkiel(1981): \"Obviously, I am biased against the chartist. This is not only a personal predilection, but a professional one as well. Technical analysis is anathema to, the academic world. We love to pick onit. Our bullying tactics' are prompted by two considerations: (1) the method is patently false; and (2) it's easy to pick on. And while it may seem a bit unfair to pick on such a sorry target, just remember': His your money we are trying to save.\" , Nonetheless, technical analysis has been enjoying a renaissance on Wall Street. All major brokerage firms publish technical commentary on the market and individual securities\" and many of the newsletters published by various \"experts\" are based on technical analysis. In recent years the efficient market hypothesis has come under serious siege. Various papers suggested that stock returns are not fully explained by common risk measures. A significant relationship between expected return and fundamental variables such as price-earnings ratio, market-to, book ratio and size was documented. Another group ofpapers has uncovered systematic patterns in stock returns related to various calendar periods such as the weekend effect, the tnrn-of-the-month effect, the holiday effect and the, January effect. A line of research directly related to this work provides evidence of predictability of equity returns from past returns. De Bandt and Thaler(1985), Fama and French(1986), and Poterba and Summers(1988) find negative serial correlation in returns of individual stocks aid various portfolios over three to ten year intervals. 
Rosenberg, Reid, and Lanstein(1985) provide evidence for the presence of predictable return reversals on a monthly basis",
"title": ""
},
{
"docid": "f4b0a7e2ab8728b682b8d399a887c3df",
"text": "This paper presents a framework for localization or grounding of phrases in images using a large collection of linguistic and visual cues.1 We model the appearance, size, and position of entity bounding boxes, adjectives that contain attribute information, and spatial relationships between pairs of entities connected by verbs or prepositions. We pay special attention to relationships between people and clothing or body part mentions, as they are useful for distinguishing individuals. We automatically learn weights for combining these cues and at test time, perform joint inference over all phrases in a caption. The resulting system produces a 4% improvement in accuracy over the state of the art on phrase localization on the Flickr30k Entities dataset [25] and a 4-10% improvement for visual relationship detection on the Stanford VRD dataset [20].",
"title": ""
},
{
"docid": "90d1d78d3d624d3cb1ecc07e8acaefd4",
"text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.",
"title": ""
},
{
"docid": "8646bc8ddeadf17e443e5ddcf705e492",
"text": "This paper proposes a model predictive control (MPC) scheme for the interleaved dc-dc boost converter with coupled inductors. The main control objectives are the regulation of the output voltage to its reference value, despite changes in the input voltage and the load, and the equal sharing of the load current by the two circuit inductors. An inner control loop, using MPC, regulates the input current to its reference that is provided by the outer loop, which is based on a load observer. Simulation results are provided to highlight the performance of the proposed control scheme.",
"title": ""
},
{
"docid": "2113655d3467fbdbf7769e36952d2a6f",
"text": "The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey, we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over 80 privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.",
"title": ""
},
{
"docid": "b0901a572ecaaeb1233b92d5653c2f12",
"text": "This qualitative study offers a novel exploration of the links between social media, virtual intergroup contact, and empathy by examining how empathy is expressed through interactions on a popular social media blog. Global leaders are encouraging individuals to engage in behaviors and support policies that provide basic social foundations. It is difficult to motivate people to undertake such actions. However, research shows that empathy intensifies motivation to help others. It can cause individuals to see the world from the perspective of stigmatized group members and increase positive feelings. Social media offers a new pathway for virtual intergroup contact, providing opportunities to increase conversation about disadvantaged others and empathy. We examined expressions of empathy within a popular blog, Humans of New York (HONY), and engaged in purposeful case selection by focusing on (1) events where specific prosocial action was taken corresponding to interactions on the HONY blog and (2) presentation of people in countries other than the United States. Nine overarching themes; (1) perspective taking, (2) fantasy, (3) empathic concern, (4) personal distress, (5) relatability, (6) prosocial action, (7) community appreciation, (8) anti-empathy, and (9) rejection of anti-empathy, exemplify how the HONY community expresses and shares empathic thoughts and feelings.",
"title": ""
},
{
"docid": "976aee37c264dbf53b7b1fbbf0d583c4",
"text": "This paper applies Halliday's (1994) theory of the interpersonal, ideational and textual meta-functions of language to conceptual metaphor. Starting from the observation that metaphoric expressions tend to be organized in chains across texts, the question is raised what functions those expressions serve in different parts of a text as well as in relation to each other. The empirical part of the article consists of the sample analysis of a business magazine text on marketing. This analysis is two-fold, integrating computer-assisted quantitative investigation with qualitative research into the organization and multifunctionality of metaphoric chains as well as the cognitive scenarios evolving from those chains. The paper closes by summarizing the main insights along the lines of the three Hallidayan meta-functions of conceptual metaphor and suggesting functional analysis of metaphor at levels beyond that of text. Im vorliegenden Artikel wird Hallidays (1994) Theorie der interpersonellen, ideellen und textuellen Metafunktion von Sprache auf das Gebiet der konzeptuellen Metapher angewandt. Ausgehend von der Beobachtung, dass metaphorische Ausdrücke oft in textumspannenden Ketten angeordnet sind, wird der Frage nachgegangen, welche Funktionen diese Ausdrücke in verschiedenen Teilen eines Textes und in Bezug aufeinander erfüllen. Der empirische Teil der Arbeit besteht aus der exemplarischen Analyse eines Artikels aus einem Wirtschaftsmagazin zum Thema Marketing. Diese Analysis gliedert sich in zwei Teile und verbindet computergestütze quantitative Forschung mit einer qualitativen Untersuchung der Anordnung und Multifunktionalität von Metaphernketten sowie der kognitiven Szenarien, die aus diesen Ketten entstehen. Der Aufsatz schließt mit einer Zusammenfassung der wesentlichen Ergebnisse im Licht der Hallidayschen Metafunktionen konzeptueller Metaphern und gibt einen Ausblick auf eine funktionale Metaphernanalyse, die über die rein textuelle Ebene hinausgeht.",
"title": ""
},
{
"docid": "9cea5720bdba8af6783d9e9f8bc7b7d1",
"text": "BACKGROUND\nFeasible, cost-effective instruments are required for the surveillance of moderate-to-vigorous physical activity (MVPA) and sedentary behaviour (SB) and to assess the effects of interventions. However, the evidence base for the validity and reliability of the World Health Organisation-endorsed Global Physical Activity Questionnaire (GPAQ) is limited. We aimed to assess the validity of the GPAQ, compared to accelerometer data in measuring and assessing change in MVPA and SB.\n\n\nMETHODS\nParticipants (n = 101) were selected randomly from an on-going research study, stratified by level of physical activity (low, moderate or highly active, based on the GPAQ) and sex. Participants wore an accelerometer (Actigraph GT3X) for seven days and completed a GPAQ on Day 7. This protocol was repeated for a random sub-sample at a second time point, 3-6 months later. Analysis involved Wilcoxon-signed rank tests for differences in measures, Bland-Altman analysis for the agreement between measures for median MVPA and SB mins/day, and Spearman's rho coefficient for criterion validity and extent of change.\n\n\nRESULTS\n95 participants completed baseline measurements (44 females, 51 males; mean age 44 years, (SD 14); measurements of change were calculated for 41 (21 females, 20 males; mean age 46 years, (SD 14). There was moderate agreement between GPAQ and accelerometer for MVPA mins/day (r = 0.48) and poor agreement for SB (r = 0.19). The absolute mean difference (self-report minus accelerometer) for MVPA was -0.8 mins/day and 348.7 mins/day for SB; and negative bias was found to exist, with those people who were more physically active over-reporting their level of MVPA: those who were more sedentary were less likely to under-report their level of SB. Results for agreement in change over time showed moderate correlation (r = 0.52, p = 0.12) for MVPA and poor correlation for SB (r = -0.024, p = 0.916).\n\n\nCONCLUSIONS\nLevels of agreement with objective measurements indicate the GPAQ is a valid measure of MVPA and change in MVPA but is a less valid measure of current levels and change in SB. Thus, GPAQ appears to be an appropriate measure for assessing the effectiveness of interventions to promote MVPA.",
"title": ""
}
] |
scidocsrr
|
7fabdf6063107d656b2ae326017db1fe
|
Interpersonal influences on adolescent materialism : A new look at the role of parents and peers
|
[
{
"docid": "d602cafe18d720f024da1b36c9283ba5",
"text": "Associations between materialism and peer relations are likely to exist in elementary school children but have not been studied previously. The first two studies introduce a new Perceived Peer Group Pressures (PPGP) Scale suitable for this age group, demonstrating that perceived pressure regarding peer culture (norms for behavioral, attitudinal, and material characteristics) can be reliably measured and that it is connected to children's responses to hypothetical peer pressure vignettes. Studies 3 and 4 evaluate the main theoretical model of associations between peer relations and materialism. Study 3 supports the hypothesis that peer rejection is related to higher perceived peer culture pressure, which in turn is associated with greater materialism. Study 4 confirms that the endorsement of social motives for materialism mediates the relationship between perceived peer pressure and materialism.",
"title": ""
}
] |
[
{
"docid": "49c1924821c326f803cefff58ca7ab67",
"text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
},
{
"docid": "3c444d8918a31831c2dc73985d511985",
"text": "This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from 24 drives of at least 50-min duration were collected for analysis. The data were analyzed in two ways. Analysis I used features from 5-min intervals of data during the rest, highway, and city driving conditions to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. Analysis II compared continuous features, calculated at 1-s intervals throughout the entire drive, with a metric of observable stressors created by independent coders from videotapes. The results show that for most drivers studied, skin conductivity and heart rate metrics are most closely correlated with driver stress level. These findings indicate that physiological signals can provide a metric of driver stress in future cars capable of physiological monitoring. Such a metric could be used to help manage noncritical in-vehicle information systems and could also provide a continuous measure of how different road and traffic conditions affect drivers.",
"title": ""
},
{
"docid": "fa6ec1ff4a0849e5a4ec2dda7b20d966",
"text": "Most digital still cameras acquire imagery with a color filter array (CFA), sampling only one color value for each pixel and interpolating the other two color values afterwards. The interpolation process is commonly known as demosaicking. In general, a good demosaicking method should preserve the high-frequency information of imagery as much as possible, since such information is essential for image visual quality. We discuss in this paper two key observations for preserving high-frequency information in CFA demosaicking: (1) the high frequencies are similar across three color components, and 2) the high frequencies along the horizontal and vertical axes are essential for image quality. Our frequency analysis of CFA samples indicates that filtering a CFA image can better preserve high frequencies than filtering each color component separately. This motivates us to design an efficient filter for estimating the luminance at green pixels of the CFA image and devise an adaptive filtering approach to estimating the luminance at red and blue pixels. Experimental results on simulated CFA images, as well as raw CFA data, verify that the proposed method outperforms the existing state-of-the-art methods both visually and in terms of peak signal-to-noise ratio, at a notably lower computational cost.",
"title": ""
},
{
"docid": "eced59d8ec159f3127e7d2aeca76da96",
"text": "Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.",
"title": ""
},
{
"docid": "c797b2a78ea6eb434159fd948c0a1bf0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "6a04e07937d1c5beef84acb0a4e0e328",
"text": "Linear hashing and spiral storage are two dynamic hashing schemes originally designed for external files. This paper shows how to adapt these two methods for hash tables stored in main memory. The necessary data structures and algorithms are described, the expected performance is analyzed mathematically, and actual execution times are obtained and compared with alternative techniques. Linear hashing is found to be both faster and easier to implement than spiral storage. Two alternative techniques are considered: a simple unbalanced binary tree and double hashing with periodic rehashing into a larger table. The retrieval time of linear hashing is similar to double hashing and substantially faster than a binary tree, except for very small trees. The loading times of double hashing (with periodic reorganization), a binary tree, and linear hashing are similar. Overall, linear hashing is a simple and efficient technique for applications where the cardinality of the key set is not known in advance.",
"title": ""
},
{
"docid": "6c4433b640cf1d7557b2e74cbd2eee85",
"text": "A compact Ka-band broadband waveguide-based travelingwave spatial power combiner is presented. The low loss micro-strip probes are symmetrically inserted into both broadwalls of waveguide, quadrupling the coupling ways but the insertion loss increases little. The measured 16 dB return-loss bandwidth of the eight-way back-toback structure is from 30 GHz to 39.4 GHz (more than 25%) and the insertion loss is less than 1 dB, which predicts the power-combining efficiency is higher than 90%.",
"title": ""
},
{
"docid": "89349e8f3e7d8df8bb8ab6f55404a91f",
"text": "Due to the high intake of sugars, especially sucrose, global trends in food processing have encouraged producers to use sweeteners, particularly synthetic ones, to a wide extent. For several years, increasing attention has been paid in the literature to the stevia (Stevia rebauidana), containing glycosidic diterpenes, for which sweetening properties have been identified. Chemical composition, nutritional value and application of stevia leaves are briefl y summarized and presented.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
},
{
"docid": "3323474060ba5f1fbbbdcb152c22a6a9",
"text": "A compact triple-band microstrip slot antenna applied to WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed for realizing triple-band characteristics. It is just composed of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. Then, to prove the validation of the design, a prototype is fabricated and measured. The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.",
"title": ""
},
{
"docid": "713010fe0ee95840e6001410f8a164cc",
"text": "Three studies tested the idea that when social identity is salient, group-based appraisals elicit specific emotions and action tendencies toward out-groups. Participants' group memberships were made salient and the collective support apparently enjoyed by the in-group was measured or manipulated. The authors then measured anger and fear (Studies 1 and 2) and anger and contempt (Study 3), as well as the desire to move against or away from the out-group. Intergroup anger was distinct from intergroup fear, and the inclination to act against the out-group was distinct from the tendency to move away from it. Participants who perceived the in-group as strong were more likely to experience anger toward the out-group and to desire to take action against it. The effects of perceived in-group strength on offensive action tendencies were mediated by anger.",
"title": ""
},
{
"docid": "a7e8c3a64f6ba977e142de9b3dae7e57",
"text": "Craniofacial superimposition is a process that aims to identify a person by overlaying a photograph and a model of the skull. This process is usually carried out manually by forensic anthropologists; thus being very time consuming and presenting several difficulties in finding a good fit between the 3D model of the skull and the 2D photo of the face. In this paper we present a fast and automatic procedure to tackle the superimposition problem. The proposed method is based on real-coded genetic algorithms. Synthetic data are used to validate the method. Results on a real case from our Physical Anthropology lab of the University of Granada are also presented.",
"title": ""
},
{
"docid": "77cfc86c63ca0a7b3ed3b805ea16b9c9",
"text": "The research presented in this paper is about detecting collaborative networks inside the structure of a research social network. As case study we consider ResearchGate and SEE University academic staff. First we describe the methodology used to crawl and create an academic-academic network depending from their fields of interest. We then calculate and discuss four social network analysis centrality measures (closeness, betweenness, degree, and PageRank) for entities in this network. In addition to these metrics, we have also investigated grouping of individuals, based on automatic clustering depending from their reciprocal relationships.",
"title": ""
},
{
"docid": "7354d8c1e8253a99cfd62d8f96e57a77",
"text": "In the past few decades, clustering has been widely used in areas such as pattern recognition, data analysis, and image processing. Recently, clustering has been recognized as a primary data mining method for knowledge discovery in spatial databases, i.e. databases managing 2D or 3D points, polygons etc. or points in some d-dimensional feature space. The well-known clustering algorithms, however, have some drawbacks when applied to large spatial databases. First, they assume that all objects to be clustered reside in main memory. Second, these methods are too inefficient when applied to large databases. To overcome these limitations, new algorithms have been developed which are surveyed in this paper. These algorithms make use of efficient query processing techniques provided by spatial database systems.",
"title": ""
},
{
"docid": "23493c14053a4608203f8e77bd899445",
"text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.",
"title": ""
},
{
"docid": "3a3c0c21d94c2469bd95a103a9984354",
"text": "Recently it was shown that the problem of Maximum Inner Product Search (MIPS) is efficient and it admits provably sub-linear hashing algorithms. Asymmetric transformations before hashing were the key in solving MIPS which was otherwise hard. In [18], the authors use asymmetric transformations which convert the problem of approximate MIPS into the problem of approximate near neighbor search which can be efficiently solved using hashing. In this work, we provide a different transformation which converts the problem of approximate MIPS into the problem of approximate cosine similarity search which can be efficiently solved using signed random projections. Theoretical analysis show that the new scheme is significantly better than the original scheme for MIPS. Experimental evaluations strongly support the theoretical findings.",
"title": ""
}
] |
scidocsrr
|
7fc4b30a0ea6873fc03082ded61a82ed
|
A vision of industry 4 . 0 from an artificial intelligence point of view
|
[
{
"docid": "22fd1487e69420597c587e03f2b48f65",
"text": "Design and operation of a manufacturing enterprise involve numerous types of decision-making at various levels and domains. A complex system has a large number of design variables and decision-making requires real-time data collected from machines, processes, and business environments. Enterprise systems (ESs) are used to support data acquisition, communication, and all decision-making activities. Therefore, information technology (IT) infrastructure for data acquisition and sharing affects the performance of an ES greatly. Our objective is to investigate the impact of emerging Internet of Things (IoT) on ESs in modern manufacturing. To achieve this objective, the evolution of manufacturing system paradigms is discussed to identify the requirements of decision support systems in dynamic and distributed environments; recent advances in IT are overviewed and associated with next-generation manufacturing paradigms; and the relation of IT infrastructure and ESs is explored to identify the technological gaps in adopting IoT as an IT infrastructure of ESs. The future research directions in this area are discussed.",
"title": ""
},
{
"docid": "eead063c20e32f53ec8a5e81dbac951c",
"text": "We are currently experiencing the fourth Industrial Revolution in terms of cyber physical systems. These systems are industrial automation systems that enable many innovative functionalities through their networking and their access to the cyber world, thus changing our everyday lives significantly. In this context, new business models, work processes and development methods that are currently unimaginable will arise. These changes will also strongly influence the society and people. Family life, globalization, markets, etc. will have to be redefined. However, the Industry 4.0 simultaneously shows characteristics that represent the challenges regarding the development of cyber-physical systems, reliability, security and data protection. Following a brief introduction to Industry 4.0, this paper presents a prototypical application that demonstrates the essential aspects.",
"title": ""
}
] |
[
{
"docid": "623cdf022d333ca4d6b244f54d301650",
"text": "Alveolar rhabdomyosarcoma (ARMS) are aggressive soft tissue tumors harboring specific fusion transcripts, notably PAX3-FOXO1 (P3F). Current therapy concepts result in unsatisfactory survival rates making the search for innovative approaches necessary: targeting PAX3-FOXO1 could be a promising strategy. In this study, we developed integrin receptor-targeted Lipid-Protamine-siRNA (LPR) nanoparticles using the RGD peptide and validated target specificity as well as their post-silencing effects. We demonstrate that RGD-LPRs are specific to ARMS in vitro and in vivo. Loaded with siRNA directed against the breakpoint of P3F, these particles efficiently down regulated the fusion transcript and inhibited cell proliferation, but did not induce substantial apoptosis. In a xenograft ARMS model, LPR nanoparticles targeting P3F showed statistically significant tumor growth delay as well as inhibition of tumor initiation when injected in parallel with the tumor cells. These findings suggest that RGD-LPR targeting P3F are promising to be highly effective in the setting of minimal residual disease for ARMS.",
"title": ""
},
{
"docid": "d56fb6c80cc0d48602b48f506b0601a6",
"text": "In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally-interpretable solutions and theoretically study its properties. In a large-scale analysis of Electronic Health Records (EHR), our causally-regularized model outperforms its L1-regularized counterpart in causal accuracy and is competitive in predictive performance. We perform non-linear causality analysis by causally regularizing a special neural network architecture. We also show that the proposed causal regularizer can be used together with neural representation learning algorithms to yield up to 20% improvement over multilayer perceptron in detecting multivariate causation, a situation common in healthcare, where many causal factors should occur simultaneously to have an effect on the target variable.",
"title": ""
},
{
"docid": "6ee26f725bfb63a6ff72069e48404e68",
"text": "OBJECTIVE\nTo determine which routinely collected exercise test variables most strongly correlate with survival and to derive a fitness risk score that can be used to predict 10-year survival.\n\n\nPATIENTS AND METHODS\nThis was a retrospective cohort study of 58,020 adults aged 18 to 96 years who were free of established heart disease and were referred for an exercise stress test from January 1, 1991, through May 31, 2009. Demographic, clinical, exercise, and mortality data were collected on all patients as part of the Henry Ford ExercIse Testing (FIT) Project. Cox proportional hazards models were used to identify exercise test variables most predictive of survival. A \"FIT Treadmill Score\" was then derived from the β coefficients of the model with the highest survival discrimination.\n\n\nRESULTS\nThe median age of the 58,020 participants was 53 years (interquartile range, 45-62 years), and 28,201 (49%) were female. Over a median of 10 years (interquartile range, 8-14 years), 6456 patients (11%) died. After age and sex, peak metabolic equivalents of task and percentage of maximum predicted heart rate achieved were most highly predictive of survival (P<.001). Subsequent addition of baseline blood pressure and heart rate, change in vital signs, double product, and risk factor data did not further improve survival discrimination. The FIT Treadmill Score, calculated as [percentage of maximum predicted heart rate + 12(metabolic equivalents of task) - 4(age) + 43 if female], ranged from -200 to 200 across the cohort, was near normally distributed, and was found to be highly predictive of 10-year survival (Harrell C statistic, 0.811).\n\n\nCONCLUSION\nThe FIT Treadmill Score is easily attainable from any standard exercise test and translates basic treadmill performance measures into a fitness-related mortality risk score. The FIT Treadmill Score should be validated in external populations.",
"title": ""
},
{
"docid": "0123fd04bc65b8dfca7ff5c058d087da",
"text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.",
"title": ""
},
{
"docid": "9592fc0ec54a5216562478414dc68eb4",
"text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.",
"title": ""
},
{
"docid": "cc9de768281e58749cd073d25a97d39c",
"text": "The Dynamic Adaptive Streaming over HTTP (referred as MPEG DASH) standard is designed to provide high quality of media content over the Internet delivered from conventional HTTP web servers. The visual content, divided into a sequence of segments, is made available at a number of different bitrates so that an MPEG DASH client can automatically select the next segment to download and play back based on current network conditions. The task of transcoding media content to different qualities and bitrates is computationally expensive, especially in the context of large-scale video hosting systems. Therefore, it is preferably executed in a powerful cloud environment, rather than on the source computer (which may be a mobile device with limited memory, CPU speed and battery life). In order to support the live distribution of media events and to provide a satisfactory user experience, the overall processing delay of videos should be kept to a minimum. In this paper, we propose a novel dynamic scheduling methodology on video transcoding for MPEG DASH in a cloud environment, which can be adapted to different applications. The designed scheduler monitors the workload on each processor in the cloud environment and selects the fastest processors to run high-priority jobs. It also adjusts the video transcoding mode (VTM) according to the system load. Experimental results show that the proposed scheduler performs well in terms of the video completion time, system load balance, and video playback smoothness.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "20ef5a8b6835bedd44d571952b46ca90",
"text": "This paper proposes an XYZ-flexure parallel mechanism (FPM) with large displacement and decoupled kinematics structure. The large-displacement FPM has large motion range more than 1 mm. Moreover, the decoupled XYZ-stage has small cross-axis error and small parasitic rotation. In this study, the typical prismatic joints are investigated, and a new large-displacement prismatic joint using notch hinges is designed. The conceptual design of the FPM is proposed by assembling these modular prismatic joints, and then the optimal design of the FPM is conducted. The analytical models of linear stiffness and dynamics are derived using pseudo-rigid-body (PRB) method. Finally, the numerical simulation using ANSYS is conducted for modal analysis to verify the analytical dynamics equation. Experiments are conducted to verify the proposed design for linear stiffness, cross-axis error and parasitic rotation",
"title": ""
},
{
"docid": "e21aed852a892cbede0a31ad84d50a65",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.09.010 ⇑ Corresponding author. Tel.: +1 662 915 5519. E-mail addresses: [email protected] (C. R (D. Gamboa), [email protected] (F. Glover), [email protected] (C. Osterman). Heuristics for the traveling salesman problem (TSP) have made remarkable advances in recent years. We survey the leading methods and the special components responsible for their successful implementations, together with an experimental analysis of computational tests on a challenging and diverse set of symmetric and asymmetric TSP benchmark problems. The foremost algorithms are represented by two families, deriving from the Lin–Kernighan (LK) method and the stem-and-cycle (S&C) method. We show how these families can be conveniently viewed within a common ejection chain framework which sheds light on their similarities and differences, and gives clues about the nature of potential enhancements to today’s best methods that may provide additional gains in solving large and difficult TSPs. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "612423df25809938ada93f24be7d2ac5",
"text": "Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input–output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.",
"title": ""
},
{
"docid": "4c74b49b01e550cee8b49cbf3d142c15",
"text": "Neural embeddings are a popular set of methods for representing words, phrases or text as a low dimensional vector (typically 50-500 dimensions). However, it is difficult to interpret these dimensions in a meaningful manner, and creating neural embeddings requires extensive training and tuning of multiple parameters and hyperparameters. We present here a simple unsupervised method for representing words, phrases or text as a low dimensional vector, in which the meaning and relative importance of dimensions is transparent to inspection. We have created a near-comprehensive vector representation of words, and selected bigrams, trigrams and abbreviations, using the set of titles and abstracts in PubMed as a corpus. This vector is used to create several novel implicit word-word and text-text similarity metrics. The implicit word-word similarity metrics correlate well with human judgement of word pair similarity and relatedness, and outperform or equal all other reported methods on a variety of biomedical benchmarks, including several implementations of neural embeddings trained on PubMed corpora. Our implicit word-word metrics capture different aspects of word-word relatedness than word2vecbased metrics and are only partially correlated (rho = ~0.5-0.8 depending on task and corpus). The vector representations of words, bigrams, trigrams, abbreviations, and PubMed title+abstracts are all publicly available from http://arrowsmith.psych.uic.edu for release under CC-BY-NC license. Several public web query interfaces are also available at the same site, including one which allows the user to specify a given word and view its most closely related terms according to direct co-occurrence as well as different implicit similarity metrics.",
"title": ""
},
{
"docid": "7070a2d1e1c098950996d794c372cbc7",
"text": "Selecting the right audience for an advertising campaign is one of the most challenging, time-consuming and costly steps in the advertising process. To target the right audience, advertisers usually have two options: a) market research to identify user segments of interest and b) sophisticated machine learning models trained on data from past campaigns. In this paper we study how demand-side platforms (DSPs) can leverage the data they collect (demographic and behavioral) in order to learn reputation signals about end user convertibility and advertisement (ad) quality. In particular, we propose a reputation system which learns interest scores about end users, as an additional signal of ad conversion, and quality scores about ads, as a signal of campaign success. Then our model builds user segments based on a combination of demographic, behavioral and the new reputation signals and recommends transparent targeting rules that are easy for the advertiser to interpret and refine. We perform an experimental evaluation on industry data that showcases the benefits of our approach for both new and existing advertiser campaigns.",
"title": ""
},
{
"docid": "c4c482cc453884d0016c442b580e3424",
"text": "PURPOSE/OBJECTIVES\nTo better understand treatment-induced changes in sexuality from the patient perspective, to learn how women manage these changes in sexuality, and to identify what information they want from nurses about this symptom.\n\n\nRESEARCH APPROACH\nQualitative descriptive methods.\n\n\nSETTING\nAn outpatient gynecologic clinic in an urban area in the southeastern United States served as the recruitment site for patients.\n\n\nPARTICIPANTS\nEight women, ages 33-69, receiving first-line treatment for ovarian cancer participated in individual interviews. Five women, ages 40-75, participated in a focus group and their status ranged from newly diagnosed to terminally ill from ovarian cancer.\n\n\nMETHODOLOGIC APPROACH\nBoth individual interviews and a focus group were conducted. Content analysis was used to identify themes that described the experience of women as they became aware of changes in their sexuality. Triangulation of approach, the researchers, and theory allowed for a rich description of the symptom experience.\n\n\nFINDINGS\nRegardless of age, women reported that ovarian cancer treatment had a detrimental impact on their sexuality and that the changes made them feel \"no longer whole.\" Mechanical changes caused by surgery coupled with hormonal changes added to the intensity and dimension of the symptom experience. Physiologic, psychological, and social factors also impacted how this symptom was experienced.\n\n\nCONCLUSIONS\nRegardless of age or relationship status, sexuality is altered by the diagnosis and treatment of ovarian cancer.\n\n\nINTERPRETATION\nNurses have an obligation to educate women with ovarian cancer about anticipated changes in their sexuality that may come from treatment.",
"title": ""
},
{
"docid": "e6088779901bd4bfaf37a3a1784c3854",
"text": "There has been recently a great progress in the field of automatically generated knowledge bases and corresponding disambiguation systems that are capable of mapping text mentions onto canonical entities. Efforts like the before mentioned have enabled researchers and analysts from various disciplines to semantically “understand” contents. However, most of the approaches have been specifically designed for the English language and in particular support for Arabic is still in its infancy. Since the amount of Arabic Web contents (e.g. in social media) has been increasing dramatically over the last years, we see a great potential for endeavors that support an entity-level analytics of these data. To this end, we have developed a framework called AIDArabic that extends the existing AIDA system by additional components that allow the disambiguation of Arabic texts based on an automatically generated knowledge base distilled from Wikipedia. Even further, we overcome the still existing sparsity of the Arabic Wikipedia by exploiting the interwiki links between Arabic and English contents in Wikipedia, thus, enriching the entity catalog as well as disambiguation context.",
"title": ""
},
{
"docid": "9d9086fbdfa46ded883b14152df7f5a5",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "30178d1de9d0aab8c3ab0ac9be674d8c",
"text": "The immune system protects from infections primarily by detecting and eliminating the invading pathogens; however, the host organism can also protect itself from infectious diseases by reducing the negative impact of infections on host fitness. This ability to tolerate a pathogen's presence is a distinct host defense strategy, which has been largely overlooked in animal and human studies. Introduction of the notion of \"disease tolerance\" into the conceptual tool kit of immunology will expand our understanding of infectious diseases and host pathogen interactions. Analysis of disease tolerance mechanisms should provide new approaches for the treatment of infections and other diseases.",
"title": ""
},
{
"docid": "171fd68f380f445723b024f290a02d69",
"text": "Cytokines, produced at the site of entry of a pathogen, drive inflammatory signals that regulate the capacity of resident and newly arrived phagocytes to destroy the invading pathogen. They also regulate antigen presenting cells (APCs), and their migration to lymph nodes to initiate the adaptive immune response. When naive CD4+ T cells recognize a foreign antigen-derived peptide presented in the context of major histocompatibility complex class II on APCs, they undergo massive proliferation and differentiation into at least four different T-helper (Th) cell subsets (Th1, Th2, Th17, and induced T-regulatory (iTreg) cells in mammals. Each cell subset expresses a unique set of signature cytokines. The profile and magnitude of cytokines produced in response to invasion of a foreign organism or to other danger signals by activated CD4+ T cells themselves, and/or other cell types during the course of differentiation, define to a large extent whether subsequent immune responses will have beneficial or detrimental effects to the host. The major players of the cytokine network of adaptive immunity in fish are described in this review with a focus on the salmonid cytokine network. We highlight the molecular, and increasing cellular, evidence for the existence of T-helper cells in fish. Whether these cells will match exactly to the mammalian paradigm remains to be seen, but the early evidence suggests that there will be many similarities to known subsets. Alternative or additional Th populations may also exist in fish, perhaps influenced by the types of pathogen encountered by a particular species and/or fish group. These Th cells are crucial for eliciting disease resistance post-vaccination, and hopefully will help resolve some of the difficulties in producing efficacious vaccines to certain fish diseases.",
"title": ""
},
{
"docid": "49445cfa92b95045d23a54eca9f9a592",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. In the past decade, several data mining techniques have been proposed in the literature for predicting the churners using heterogeneous customer records. This paper reviews the different categories of customer data available in open datasets, predictive models and performance metrics used in the literature for churn prediction in telecom industry.",
"title": ""
},
{
"docid": "fddf6e71af23aba468989d6d09da989c",
"text": "The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. We argue that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions. We call this wider discipline Behavioural Computer Science (BCS), and argue in this paper for why BCS models should unify (models of) the behaviour of humans and machines when designing information and communication technology systems. Thus, one main point to be addressed is the incorporation of empirical evidence for actual human behaviour, instead of making inferences about behaviour based on the rational agent model. Empirical studies can be one effective way to constantly update the behavioural models. We are motivated by the future advancements in artificial intelligence which will give machines capabilities that from many perspectives will be indistinguishable from those of humans. Such machine behaviour would be studied using BCS models, looking at questions about machine trust like “Can a self driving car trust its passengers?”, or artificial influence like “Can the user interface adapt to the user’s behaviour, and thus influence this behaviour?”. We provide a few directions for approaching BCS, focusing on modelling of human and machine behaviour, as well as their interaction.",
"title": ""
}
] |
scidocsrr
|
276ce39f90cdd8bc86d4434f5451e320
|
An overview of end-to-end language understanding and dialog management for personal digital assistants
|
[
{
"docid": "a48278ee8a21a33ff87b66248c6b0b8a",
"text": "We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.",
"title": ""
},
{
"docid": "6f768934f02c0e559801a7b98d0fbbd7",
"text": "Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants ac-cording to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.",
"title": ""
},
{
"docid": "a64bcfefdebc43809636d6d39887f6e2",
"text": "This paper investigates the use of deep belief networks (DBN) for semantic tagging, a sequence classification task, in spoken language understanding (SLU). We evaluate the performance of the DBN based sequence tagger on the well-studied ATIS task and compare our technique to conditional random fields (CRF), a state-of-the-art classifier for sequence classification. In conjunction with lexical and named entity features, we also use dependency parser based syntactic features and part of speech (POS) tags [1]. Under both noisy conditions (output of automatic speech recognition system) and clean conditions (manual transcriptions), our deep belief network based sequence tagger outperforms the best CRF based system described in [1] by an absolute 2% and 1% F-measure, respectively.Upon carrying out an analysis of cases where CRF and DBN models made different predictions, we observed that when discrete features are projected onto a continuous space during neural network training, the model learns to cluster these features leading to its improved generalization capability, relative to a CRF model, especially in cases where some features are either missing or noisy.",
"title": ""
},
{
"docid": "fbda5771eb59ef5abf6810b47412452d",
"text": "We demonstrate the Task Completion Platform (TCP); a multi-domain dialogue platform that can host and execute large numbers of goal-orientated dialogue tasks. The platform features a task configuration language, TaskForm, that allows the definition of each individual task to be decoupled from the overarching dialogue policy used by the platform to complete those tasks. This separation allows for simple and rapid authoring of new tasks, while dialogue policy and platform functionality evolve independent of the tasks. The current platform includes machine learnt models that provide contextual slot carry-over, flexible item selection, and task selection/switching. Any new task immediately gains the benefit of these pieces of built-in platform functionality. The platform is used to power many of the multi-turn dialogues supported by the Cortana personal assistant.",
"title": ""
}
] |
[
{
"docid": "48c0cf44910459d16a45b31d25855b70",
"text": "In this paper, a beam-steering antenna array that employs a new type of reconfigurable phase shifter is presented. The phase shifter consists of a number of cascaded reconfigurable defected microstrip structure (DMS) units. Each DMS unit is made by etching a slot in a microstrip line and loading the slot with PIN diodes. The “on” and “off” states of the PIN diodes in the DMS unit provide the phase shifts by changing the current paths. Analyses on the performance of various phase shifters cascading different numbers of DMS units are conducted by both simulations and experiments. Using the proposed phase-shifter units and Wilkinson power dividers, a four-element beam-steering antenna array was designed, fabricated, and tested. Experimental results agree well with the simulated ones. The proposed antenna array employing DMS-based phase shifters offers a low-cost solution to beamforming in wireless communications.",
"title": ""
},
{
"docid": "2e0a8498613410fd5827e30b7b4daade",
"text": "Linux containers in the commercial world are changing the landscape for application development and deployments. Container technologies are also making inroads into HPC environments, as exemplified by NERSC’s Shifter and LBL’s Singularity. While the first generation of HPC containers offers some of the same benefits as the existing open container frameworks, like CoreOS or Docker, they do not address the cloud/commercial feature sets such as virtualized networks, full isolation, and orchestration. This paper will explore the use of containers in the HPC environment and summarize our study to determine how best to use these technologies in the HPC environment at scale. KeywordsShifter; HPC; Cray; container; virtualization; Docker; CoreOS",
"title": ""
},
{
"docid": "e8d2bad4083a4a6cf5f96aedd5112f3f",
"text": "Mechanic's hands is a poorly defined clinical finding that has been reported in a variety of rheumatologic diseases. Morphologic descriptions include hyperkeratosis on the sides of the digits that sometimes extends to the distal tips, diffuse palmar scale, and (more recently observed) linear discrete scaly papules in a similar lateral distribution. The association of mechanic's hands with dermatomyositis, although recognized, is still debatable. In this review, most studies have shown that mechanic's hands is commonly associated with dermatomyositis and displays histopathologic findings of interface dermatitis, colloid bodies, and interstitial mucin, which are consistent with a cutaneous connective tissue disease. A more specific definition of this entity would help to determine its usefulness in classifying and clinically identifying patients with dermatomyositis, with implications related to subsequent screening for associated comorbidities in this setting.",
"title": ""
},
{
"docid": "b9c253196a1cac6109e814e5d9a7cd97",
"text": "In this digital age, most business is conducted electronically. This contemporary paradigm creates openings for potentially harmful unanticipated information security incidents of both a criminal or civil nature, with the potential to cause considerable direct and indirect damage to smaller businesses. Electronic evidence is fundamental to the successful handling of such incidents. If an organisation does not prepare proactively for such incidents it is highly likely that important relevant digital evidence will not be available. Not being able to respond effectively could be extremely damaging to smaller companies, as they are unable to absorb losses as easily as larger organisations. In order to prepare smaller businesses for incidents of this nature, the implementation of Digital Forensic Readiness policies and procedures is necessitated. Numerous varying factors such as the perceived high cost, as well as the current lack of forensic skills, make the implementation of Digital Forensic Readiness appear difficult if not infeasible for smaller organisations. In order to solve this problem it is necessary to develop a scalable and flexible framework for the implementation of Digital Forensic Readiness based on the individual risk profile of a small to medium enterprise (SME). This paper aims to determine, from literature, the concepts of Digital Forensic Readiness and how they apply to SMEs. Based on the findings, the aspects of Digital Forensics and organisational characteristics that should be included in such a framework is highlighted.",
"title": ""
},
{
"docid": "8107b3dc36d240921571edfc778107ff",
"text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.",
"title": ""
},
{
"docid": "26095dbc82b68c32881ad9316256bc42",
"text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out hopefully to achieve further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). Delusional patients have reasoning bias such as jumping to conclusions, and those with hallucination have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.",
"title": ""
},
{
"docid": "bce7787c5d56985006231471b57926c8",
"text": "Isoquercitrin is a rare, natural ingredient with several biological activities that is a key precursor for the synthesis of enzymatically modified isoquercitrin (EMIQ). The enzymatic production of isoquercitrin from rutin catalyzed by hesperidinase is feasible; however, the bioprocess is hindered by low substrate concentration and a long reaction time. Thus, a novel biphase system consisting of [Bmim][BF4]:glycine-sodium hydroxide (pH 9) (10:90, v/v) and glyceryl triacetate (1:1, v/v) was initially established for isoquercitrin production. The biotransformation product was identified using liquid chromatography-mass spectrometry, and the bonding mechanism of the enzyme and substrate was inferred using circular dichroism spectra and kinetic parameters. The highest rutin conversion of 99.5% and isoquercitrin yield of 93.9% were obtained after 3 h. The reaction route is environmentally benign and mild, and the biphase system could be reused. The substrate concentration was increased 2.6-fold, the reaction time was reduced to three tenths the original time. The three-dimensional structure of hesperidinase was changed in the biphase system, which α-helix and random content were reduced and β-sheet content was increased. Thus, the developed biphase system can effectively strengthen the hesperidinase-catalyzed synthesis of isoquercitrin with high yield.",
"title": ""
},
{
"docid": "fa34e68369a138cbaaf9ad085803e504",
"text": "This paper proposes an optimal rotor design method of an interior permanent magnet synchronous motor (IPMSM) by using a permanent magnet (PM) shape. An IPMSM is a structure in which PMs are buried in an inner rotor. The torque, torque ripple, and safety factor of IPMSM can vary depending on the position of the inserted PMs. To determine the optimal design variables according to the placement of the inserted PMs, parameter analysis was performed. Therefore, a response surface methodology, which is one of the statistical analysis design methods, was used. Among many other response surface methodologies, Box-Behnken design is the most commonly used. For the purpose of this research, Box-Behnken design was used to find the design parameter that can achieve minimum experimental variables of objective function. This paper determines the insert position of the PM to obtain high-torque, low-torque ripple by using a finite-element-method, and this paper obtains an optimal design by using a mechanical stiffness method in which a safety factor is considered.",
"title": ""
},
{
"docid": "ab430a12088341758de5cde60ef26070",
"text": "BACKGROUND\nThe nonselective 5-HT(4) receptor agonists, cisapride and tegaserod have been associated with cardiovascular adverse events (AEs).\n\n\nAIM\nTo perform a systematic review of the safety profile, particularly cardiovascular, of 5-HT(4) agonists developed for gastrointestinal disorders, and a nonsystematic summary of their pharmacology and clinical efficacy.\n\n\nMETHODS\nArticles reporting data on cisapride, clebopride, prucalopride, mosapride, renzapride, tegaserod, TD-5108 (velusetrag) and ATI-7505 (naronapride) were identified through a systematic search of the Cochrane Library, Medline, Embase and Toxfile. Abstracts from UEGW 2006-2008 and DDW 2008-2010 were searched for these drug names, and pharmaceutical companies approached to provide unpublished data.\n\n\nRESULTS\nRetrieved articles on pharmacokinetics, human pharmacodynamics and clinical data with these 5-HT(4) agonists, are reviewed and summarised nonsystematically. Articles relating to cardiac safety and tolerability of these agents, including any relevant case reports, are reported systematically. Two nonselective 5-HT(4) agonists had reports of cardiovascular AEs: cisapride (QT prolongation) and tegaserod (ischaemia). Interactions with, respectively, the hERG cardiac potassium channel and 5-HT(1) receptor subtypes have been suggested to account for these effects. No cardiovascular safety concerns were reported for the newer, selective 5-HT(4) agonists prucalopride, velusetrag, naronapride, or for nonselective 5-HT(4) agonists with no hERG or 5-HT(1) affinity (renzapride, clebopride, mosapride).\n\n\nCONCLUSIONS\n5-HT(4) agonists for GI disorders differ in chemical structure and selectivity for 5-HT(4) receptors. Selectivity for 5-HT(4) over non-5-HT(4) receptors may influence the agent's safety and overall risk-benefit profile. Based on available evidence, highly selective 5-HT(4) agonists may offer improved safety to treat patients with impaired GI motility.",
"title": ""
},
{
"docid": "62d940d69688bd66d30aca31eb98e256",
"text": "Recent years have seen increasing demand for treatments aimed at improving dental esthetics. In this context, both patients and dentists prefer to preserve dental structures as far as possible; thanks to technological advances, especially in adhesive dentistry, new materials and minimally invasive techniques such as \"no-prep\" (no preparation) veneers have made this possible. Nevertheless, no-prep veneers have specific indications and suffer certain disadvantages.\n\n\nOBJECTIVES\nThis clinical case describes the rehabilitation of the upper anterior region by means of no-prep veneers, with BOPT (Biologically Oriented Preparation Technique) cervical margins. The patient had requested an aesthetic treatment to improve irregularities of the gingival margins associated with the presence of diastemata resulting from microdontia. Key words:BOPT, micro-veneers, hybrid ceramic, ultra-fine veneers, diastemata, without prosthetic finish line, no-prep.",
"title": ""
},
{
"docid": "46a11a7ea2d8ada47e069b3ece775a32",
"text": "OBJECTIVE\nTo develop and validate a short questionnaire to estimate physical activity (PA) practice and sedentary behavior for the adult population.\n\n\nMETHODS\nThe short questionnaire was developed using data from a cross-sectional population-based survey (n = 6352) that included the Minnesota leisure-time PA questionnaire. Activities that explained a significant proportion of the variability of population PA practice were identified. Validation of the short questionnaire included a cross-sectional component to assess validity with respect to the data collected by accelerometers and a longitudinal component to assess reliability and sensitivity to detect changes (n = 114, aged 35 to 74 years).\n\n\nRESULTS\nSix types of activities that accounted for 87% of population variability in PA estimated with the Minnesota questionnaire were selected. The short questionnaire estimates energy expenditure in total PA and by intensity (light, moderate, vigorous), and includes 2 questions about sedentary behavior and a question about occupational PA. The short questionnaire showed high reliability, with intraclass correlation coefficients ranging between 0.79 to 0.95. The Spearman correlation coefficients between estimated energy expenditure obtained with the questionnaire and the number of steps detected by the accelerometer were as follows: 0.36 for total PA, 0.40 for moderate intensity, and 0.26 for vigorous intensity. The questionnaire was sensitive to detect changes in moderate and vigorous PA (correlation coefficients ranging from 0.26 to 0.34).\n\n\nCONCLUSION\nThe REGICOR short questionnaire is reliable, valid, and sensitive to detect changes in moderate and vigorous PA. This questionnaire could be used in daily clinical practice and epidemiological studies.",
"title": ""
},
{
"docid": "fd14b9e25affb05fd9b05036f3ce350b",
"text": "Recent advances in pedestrian detection are attained by transferring the learned features of Convolutional Neural Network (ConvNet) to pedestrians. This ConvNet is typically pre-trained with massive general object categories (e.g. ImageNet). Although these features are able to handle variations such as poses, viewpoints, and lightings, they may fail when pedestrian images with complex occlusions are present. Occlusion handling is one of the most important problem in pedestrian detection. Unlike previous deep models that directly learned a single detector for pedestrian detection, we propose DeepParts, which consists of extensive part detectors. DeepParts has several appealing properties. First, DeepParts can be trained on weakly labeled data, i.e. only pedestrian bounding boxes without part annotations are provided. Second, DeepParts is able to handle low IoU positive proposals that shift away from ground truth. Third, each part detector in DeepParts is a strong detector that can detect pedestrian by observing only a part of a proposal. Extensive experiments in Caltech dataset demonstrate the effectiveness of DeepParts, which yields a new state-of-the-art miss rate of 11:89%, outperforming the second best method by 10%.",
"title": ""
},
{
"docid": "09ecaf2cb56296c8097525b2c1ffb7dc",
"text": "Fruit and vegetables classification and recognition are still challenging in daily production and life. In this paper, we propose an efficient fruit and vegetables classification system using image saliency to draw the object regions and convolutional neural network (CNN) model to extract image features and implement classification. Image saliency is utilized to select main saliency regions according to saliency map. A VGG model is chosen to train for fruit and vegetables classification. Another contribution in this paper is that we establish a fruit and vegetables images database spanning 26 categories, which covers the major types in real life. Experiments are conducted on our own database, and the results show that our classification system achieves an excellent accuracy rate of 95.6%.",
"title": ""
},
{
"docid": "3111ef9867be7cf58be9694cbe2a14d9",
"text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variousity of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory(Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simutaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during training phase, and then separate them by length during testing phase. In NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis(CGED), Our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simpified Chinese track.",
"title": ""
},
{
"docid": "1b2682d250ec1cddbb14303b14effef3",
"text": "This paper presents a path planning concept for trucks with trailers with kingpin hitching. This system is nonholonomic, has no flat output and is not stable in backwards driving direction. These properties are major challenges for path planning. The presented approach concentrates on the loading bay scenario. The considered task is to plan a path for the truck-trailer system from a start to a specified target configuration corresponding to the loading bay. Thereby, close distances to obstacles and multiple driving direction changes have to be handled. Furthermore, a so-called jackknife position has to be avoided. In a first step, an initial path is planned from the target to the start configuration using a tree-based path planner. Afterwards this path is refined locally by solving an optimal control problem. Due to the local nature of the planner, heuristic rules for direction changes are formulated. The performance of the proposed path planner is evaluated in simulation studies.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "9fd5e182851ff0be67e8865c336a1f77",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "1c386cf468f62a812640b7f8b528bb73",
"text": "An efficient nanomedical platform that can combine two-photon cell imaging, near infrared (NIR) light and pH dual responsive drug delivery, and photothermal treatment was successfully developed based on fluorescent porous carbon-nanocapsules (FPC-NCs, size ∼100 nm) with carbon dots (CDs) embedded in the shell. The stable, excitation wavelength (λex)-tunable and upconverted fluorescence from the CDs embedded in the porous carbon shell enable the FPC-NCs to serve as an excellent confocal and two-photon imaging contrast agent under the excitation of laser with a broad range of wavelength from ultraviolet (UV) light (405 nm) to NIR light (900 nm). The FPC-NCs demonstrate a very high loading capacity (1335 mg g(-1)) toward doxorubicin drug benefited from the hollow cavity structure, porous carbon shell, as well as the supramolecular π stacking and electrostatic interactions between the doxorubicin molecules and carbon shell. In addition, a responsive release of doxorubicin from the FPC-NCs can be activated by lowering the pH to acidic (from 7.4 to 5.0) due to the presence of pH-sensitive carboxyl groups on the FPC-NCs and amino groups on doxorubicin molecules. Furthermore, the FPC-NCs can absorb and effectively convert the NIR light to heat, thus, manifest the ability of NIR-responsive drug release and combined photothermal/chemo-therapy for high therapeutic efficacy.",
"title": ""
},
{
"docid": "42d3f666325c3c9e2d61fcbad3c6659a",
"text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.",
"title": ""
},
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
}
] |
scidocsrr
|
01422d25c62ea15fd60f954b897e18ca
|
Localization in highly dynamic environments using dual-timescale NDT-MCL
|
[
{
"docid": "0203b3995c21e5e7026fe787eaef6e09",
"text": "Pose SLAM is the variant of simultaneous localization and map building (SLAM) is the variant of SLAM, in which only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. To reduce the computational cost of the information filter form of Pose SLAM and, at the same time, to delay inconsistency as much as possible, we introduce an approach that takes into account only highly informative loop-closure links and nonredundant poses. This approach includes constant time procedures to compute the distance between poses, the expected information gain for each potential link, and the exact marginal covariances while moving in open loop, as well as a procedure to recover the state after a loop closure that, in practical situations, scales linearly in terms of both time and memory. Using these procedures, the robot operates most of the time in open loop, and the cost of the loop closure is amortized over long trajectories. This way, the computational bottleneck shifts to data association, which is the search over the set of previously visited poses to determine good candidates for sensor registration. To speed up data association, we introduce a method to search for neighboring poses whose complexity ranges from logarithmic in the usual case to linear in degenerate situations. The method is based on organizing the pose information in a balanced tree whose internal levels are defined using interval arithmetic. The proposed Pose-SLAM approach is validated through simulations, real mapping sessions, and experiments using standard SLAM data sets.",
"title": ""
},
{
"docid": "48903eded4e1a88114e3917e2e6173b6",
"text": "The problem of generating maps with mobile robots has received considerable attention over the past years. Most of the techniques developed so far have been designed for situations in which the environment is static during the mapping process. Dynamic objects, however, can lead to serious errors in the resulting maps such as spurious objects or misalignments due to localization errors. In this paper we consider the problem of creating maps with mobile robots in dynamic environments. We present a new approach that interleaves mapping and localization with a probabilistic technique to identify spurious measurements. In several experiments we demonstrate that our algorithm generates accurate 2d and 3d in different kinds of dynamic indoor and outdoor environments. We also use our algorithm to isolate the dynamic objects and to generate three-dimensional representation of them.",
"title": ""
},
{
"docid": "bef4cf486ddc37d8ff4d5ed7a2b72aba",
"text": "We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two grid maps provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and real robots experiments show the efficiency of our approach and also show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.",
"title": ""
}
] |
[
{
"docid": "f136e875f021ea3ea67a87c6d0b1e869",
"text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.",
"title": ""
},
{
"docid": "46950519803aba56a0cce475964b99d7",
"text": "The coverage problem in the field of robotics is the problem of moving a sensor or actuator over all points in a given region. Example applications of this problem are lawn mowing, spray painting, and aerial or underwater mapping. In this paper, I consider the single-robot offline version of this problem, i.e. given a map of the region to be covered, plan an efficient path for a single robot that sweeps the sensor or actuator over all points. One basic approach to this problem is to decompose the region into subregions, select a sequence of those subregions, and then generate a path that covers each subregion in turn. This paper addresses the problem of creating a good decomposition. Under certain assumptions, the cost to cover a polygonal subregion is proportional to its minimum altitude. An optimal decomposition then minimizes the sum of subregion altitudes. This paper describes an algorithm to find the minimal sum of altitudes (MSA) decomposition of a region with a polygonal boundary and polygonal holes. This algorithm creates an initial decomposition based upon multiple line sweeps and then applies dynamic programming to find the optimal decomposition. This paper describes the algorithm and reports results from an implementation. Several appendices give details and proofs regarding line sweep algorithms.",
"title": ""
},
{
"docid": "f74aa960091bef1701dbc616657facb3",
"text": "Adverse reactions and unintended effects can occasionally occur with toxins for cosmetic use, even although they generally have an outstanding safety profile. As the use of fillers becomes increasingly more common, adverse events can be expected to increase as well. This article discusses complication avoidance, addressing appropriate training and proper injection techniques, along with patient selection and patient considerations. In addition to complications, avoidance or amelioration of common adverse events is discussed.",
"title": ""
},
{
"docid": "fdbe390730b949ccaa060a84257af2f1",
"text": "An increase in the prevalence of chronic disease has led to a rise in the demand for primary healthcare services in many developed countries. Healthcare technology tools may provide the leverage to alleviate the shortage of primary care providers. Here we describe the development and usage of an automated healthcare kiosk for the management of patients with stable chronic disease in the primary care setting. One-hundred patients with stable chronic disease were recruited from a primary care clinic. They used a kiosk in place of doctors’ consultations for two subsequent follow-up visits. Patient and physician satisfaction with kiosk usage were measured on a Likert scale. Kiosk blood pressure measurements and triage decisions were validated and optimized. Patients were assessed if they could use the kiosk independently. Patients and physicians were satisfied with all areas of kiosk usage. Kiosk triage decisions were accurate by the 2nd month of the study. Blood pressure measurements by the kiosk were equivalent to that taken by a nurse (p = 0.30, 0.14). Independent kiosk usage depended on patients’ language skills and educational levels. Healthcare kiosks represent an alternative way to manage patients with stable chronic disease. They have the potential to replace physician visits and improve access to primary healthcare. Patients welcome the use of healthcare technology tools, including those with limited literacy and education. Optimization of environmental and patient factors may be required prior to the implementation of kiosk-based technology in the healthcare setting.",
"title": ""
},
{
"docid": "f098bcf49fc82868cdc89a159e0c49eb",
"text": "Progressive Visual Analytics aims at improving the interactivity in existing analytics techniques by means of visualization as well as interaction with intermediate results. One key method for data analysis is dimensionality reduction, for example, to produce 2D embeddings that can be visualized and analyzed efficiently. t-Distributed Stochastic Neighbor Embedding (tSNE) is a well-suited technique for the visualization of high-dimensional data. tSNE can create meaningful intermediate results but suffers from a slow initialization that constrains its application in Progressive Visual Analytics. We introduce a controllable tSNE approximation (A-tSNE), which trades off speed and accuracy, to enable interactive data exploration. We offer real-time visualization techniques, including a density-based solution and a Magic Lens to inspect the degree of approximation. With this feedback, the user can decide on local refinements and steer the approximation level during the analysis. We demonstrate our technique with several datasets, in a real-world research scenario and for the real-time analysis of high-dimensional streams to illustrate its effectiveness for interactive data analysis.",
"title": ""
},
{
"docid": "0c4f02b3b361d60da1aec0f0c100dcf9",
"text": "Architecture Compliance Checking (ACC) is an approach to verify the conformance of implemented program code to high-level models of architectural design. ACC is used to prevent architectural erosion during the development and evolution of a software system. Static ACC, based on static software analysis techniques, focuses on the modular architecture and especially on rules constraining the modular elements. A semantically rich modular architecture (SRMA) is expressive and may contain modules with different semantics, like layers and subsystems, constrained by rules of different types. To check the conformance to an SRMA, ACC-tools should support the module and rule types used by the architect. This paper presents requirements regarding SRMA support and an inventory of common module and rule types, on which basis eight commercial and non-commercial tools were tested. The test results show large differences between the tools, but all could improve their support of SRMA, what might contribute to the adoption of ACC in practice.",
"title": ""
},
{
"docid": "a84d2de19a34b914e583c9f4379b68da",
"text": "English) xx Abstract(Arabic) xxiiArabic) xxii",
"title": ""
},
{
"docid": "22ad829acba8d8a0909f2b8e31c1f0c3",
"text": "Covariance matrices capture correlations that are invaluable in modeling real-life datasets. Using all d elements of the covariance (in d dimensions) is costly and could result in over-fitting; and the simple diagonal approximation can be over-restrictive. In this work, we present a new model, the Low-Rank Gaussian Mixture Model (LRGMM), for modeling data which can be extended to identifying partitions or overlapping clusters. The curse of dimensionality that arises in calculating the covariance matrices of the GMM is countered by using low-rank perturbed diagonal matrices. The efficiency is comparable to the diagonal approximation, yet one can capture correlations among the dimensions. Our experiments reveal the LRGMM to be an efficient and highly applicable tool for working with large high-dimensional datasets.",
"title": ""
},
{
"docid": "4ee17de5de87d923fafc9dbbe7266f2b",
"text": "Introduction Researchers have agreed that a favorable corporate reputation is one of the most important intangible assets driving company performance (Chun 2005; Fisher-Buttinger and Vallaster 2011; Gibson et al. 2006). Not to be confused with brand identity and image, corporate reputation is often defined as consumers’ accumulated opinions, perceptions, and attitudes towards the company (Fombrun et al. 2000; Fombrun and Shanley 1990; Hatch and Schultz 2001; Weigelt and Camerer 1988). In addition, corporate reputation is established by individuals’ relative perspective; thus, corporate reputation is closely linked to the consumers’ subjective evaluation about the company (Fombrun and Shanley 1990; Weigelt and Camerer 1988). The effect of corporate reputation on corporate performance has been supported in many articles. Earlier studies have reported that a positive reputation has a significant Abstract",
"title": ""
},
{
"docid": "17db752bfc7ce75ded5b3836c5ae3dd7",
"text": "Knowledge-based question answering relies on the availability of facts, the majority of which cannot be found in structured sources (e.g. Wikipedia info-boxes, Wikidata). One of the major components of extracting facts from unstructured text is Relation Extraction (RE). In this paper we propose a novel method for creating distant (weak) supervision labels for training a large-scale RE system. We also provide new evidence about the effectiveness of neural network approaches by decoupling the model architecture from the feature design of a state-of-the-art neural network system. Surprisingly, a much simpler classifier trained on similar features performs on par with the highly complex neural network system (at 75x reduction to the training time), suggesting that the features are a bigger contributor to the final performance.",
"title": ""
},
{
"docid": "685a3c1eee19ee71c36447c49aca757f",
"text": "Advanced diagnostic technologies, such as polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA), have been widely used in well-equipped laboratories. However, they are not affordable or accessible in resource-limited settings due to the lack of basic infrastructure and/or trained operators. Paper-based diagnostic technologies are affordable, user-friendly, rapid, robust, and scalable for manufacturing, thus holding great potential to deliver point-of-care (POC) diagnostics to resource-limited settings. In this review, we present the working principles and reaction mechanism of paper-based diagnostics, including dipstick assays, lateral flow assays (LFAs), and microfluidic paper-based analytical devices (μPADs), as well as the selection of substrates and fabrication methods. Further, we report the advances in improving detection sensitivity, quantification readout, procedure simplification and multi-functionalization of paper-based diagnostics, and discuss the disadvantages of paper-based diagnostics. We envision that miniaturized and integrated paper-based diagnostic devices with the sample-in-answer-out capability will meet the diverse requirements for diagnosis and treatment monitoring at the POC.",
"title": ""
},
{
"docid": "59a4bf897006a0bcadb562ff6446e4e5",
"text": "As the number and variety of cyber threats increase, it becomes more critical to share intelligence information in a fast and efficient manner. However, current cyber threat intelligence data do not contain sufficient information about how to specify countermeasures or how institutions should apply countermeasures automatically on their networks. A flexible and agile network architecture is required in order to determine and deploy countermeasures quickly. Software-defined networks facilitate timely application of cyber security measures thanks to their programmability. In this work, we propose a novel model for producing software-defined networking-based solutions against cyber threats and configuring networks automatically using risk analysis. We have developed a prototype implementation of the proposed model and demonstrated the applicability of the model. Furthermore, we have identified and presented future research directions in this area.",
"title": ""
},
{
"docid": "dcece9a321b4483de7327de29a641fd2",
"text": "A class of optimal control problems for quasilinear elliptic equations is considered, where the coefficients of the elliptic differential operator depend on the state function. Firstand second-order optimality conditions are discussed for an associated control-constrained optimal control problem. In particular, the Pontryagin maximum principle and second-order sufficient optimality conditions are derived. One of the main difficulties is the non-monotone character of the state equation.",
"title": ""
},
{
"docid": "fb9afce9cc1683cb3adc2e3c747758e4",
"text": "Hundreds of millions of users each day use web search engines to meet their information needs. Advances in web search effectiveness are therefore perhaps the most significant public outcomes of IR research. Query expansion is one such method for improving the effectiveness of ranked retrieval by adding additional terms to a query. In previous approaches to query expansion, the additional terms are selected from highly ranked documents returned from an initial retrieval run. We propose a new method of obtaining expansion terms, based on selecting terms from past user queries that are associated with documents in the collection. Our scheme is effective for query expansion for web retrieval: our results show relative improvements over unexpanded full text retrieval of 26%--29%, and 18%--20% over an optimised, conventional expansion approach.",
"title": ""
},
{
"docid": "d0992076bfbf8cac6fd66c5bbfb671eb",
"text": "In this paper, we propose a supervised model for ranking word importance that incorporates a rich set of features. Our model is superior to prior approaches for identifying words used in human summaries. Moreover we show that an extractive summarizer which includes our estimation of word importance results in summaries comparable with the state-of-the-art by automatic evaluation. Disciplines Computer Engineering | Computer Sciences Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-14-02. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/989 Improving the Estimation of Word Importance for News Multi-Document Summarization Extended Technical Report Kai Hong University of Pennsylvania Philadelphia, PA, 19104 [email protected] Ani Nenkova University of Pennsylvania Philadelphia, PA, 19104 [email protected]",
"title": ""
},
{
"docid": "3e6dbaf4ef18449c82e29e878fa9a8c5",
"text": "The description of a software architecture style must include the structural model of the components and their interactions, the laws governing the dynamic changes in the architecture, and the communication pattern. In our work we represent a system as a graph where hyperedges are components and nodes are ports of communication. The construction and dynamic evolut,ion of the style will be represented as context-free productions and graph rewriting. To model the evolution of the system we propose to use techniques of constraint solving. From this approach we obtain an intuitive way to model systems with nice characteristics for the description of dynamic architectures and reconfiguration and, a unique language to describe the style, model the evolution of the system and prove properties.",
"title": ""
},
{
"docid": "7e58396148d8e8c8ca7d3439c6b5c872",
"text": "The traditional inductor-based buck converter has been the dominant design for step-down switched-mode voltage regulators for decades. Switched-capacitor (SC) DC-DC converters, on the other hand, have traditionally been used in low- power (<;10mW) and low-conversion-ratio (<;4:1) applications where neither regulation nor efficiency is critical. However, a number of SC converter topologies are very effective in their utilization of switches and passive elements, especially in relation to the ever-popular buck converters [1,2,5]. This work encompasses the complete design, fabrication, and test of a CMOS-based switched-capacitor DC-DC converter, addressing the ubiquitous 12 to 1.5V board-mounted point-of-load application. In particular, the circuit developed in this work attains higher efficiency (92% peak, and >;80% over a load range of 5mA to 1A) than surveyed competitive buck converters, while requiring less board area and less costly passive components. The topology and controller enable a wide input voltage (V!N) range of 7.5 to 13.5V with an output voltage (Vοuτ) of 1.5V Control techniques based on feedback and feedforward provide tight regulation (30mVpp) under worst-case load-step (1A) conditions. This work shows that SC converters can outperform buck converters, and thus the scope of SC converter applications can and should be expanded.",
"title": ""
},
{
"docid": "b0fac0b564e662b43c77593902e502fc",
"text": "Three methods for fitting the diffusion model (Ratcliff, 1978) to experimental data are examined. Sets of simulated data were generated with known parameter values, and from fits of the model, we found that the maximum likelihood method was better than the chi-square and weighted least squares methods by criteria of bias in the parameters relative to the parameter values used to generate the data and standard deviations in the parameter estimates. The standard deviations in the parameter values can be used as measures of the variability in parameter estimates from fits to experimental data. We introduced contaminant reaction times and variability into the other components of processing besides the decision process and found that the maximum likelihood and chi-square methods failed, sometimes dramatically. But the weighted least squares method was robust to these two factors. We then present results from modifications of the maximum likelihood and chi-square methods, in which these factors are explicitly modeled, and show that the parameter values of the diffusion model are recovered well. We argue that explicit modeling is an important method for addressing contaminants and variability in nondecision processes and that it can be applied in any theoretical approach to modeling reaction time.",
"title": ""
},
{
"docid": "127bbcb3df6c43c2d791929426e5e087",
"text": "Uplift modeling is a classification method that determines the incremental impact of an action on a given population. Uplift modeling aims at maximizing the area under the uplift curve, which is the difference between the subject and control sets’ area under the lift curve. Lift and uplift curves are seldom used outside of the marketing domain, whereas the related ROC curve is frequently used in multiple areas. Achieving a good uplift using an ROC-based model instead of lift may be more intuitive in several areas, and may help uplift modeling reach a wider audience. We alter SAYL, an uplift-modeling statistical relational learner, to use ROC instead of lift. We test our approach on a screening mammography dataset. SAYL-ROC outperforms SAYL on our data, though not significantly, suggesting that ROC can be used for uplift modeling. On the other hand, SAYL-ROC returns larger models, reducing interpretability.",
"title": ""
},
{
"docid": "f6a9670544a784a5fc431746557473a3",
"text": "Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase-drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make √N the circuit power increase as N, instead of linearly, by careful circuit-aware system design.",
"title": ""
}
] |
scidocsrr
|
d1fed528c5a08bb4995f74ffe1391fa8
|
Structure and function of auditory cortex: music and speech
|
[
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
}
] |
[
{
"docid": "a24b4546eb2da7ce6ce70f45cd16e07d",
"text": "This paper examines the state of the art in mobile clinical and health-related apps. A 2012 estimate puts the number of health-related apps at no fewer than 40,000, as healthcare professionals and consumers continue to express concerns about the quality of many apps, calling for some form of app regulatory control or certification to be put in place. We describe the range of apps on offer as of 2013, and then present a brief survey of evaluation studies of medical and health-related apps that have been conducted to date, covering a range of clinical disciplines and topics. Our survey includes studies that highlighted risks, negative issues and worrying deficiencies in existing apps. We discuss the concept of 'apps as a medical device' and the relevant regulatory controls that apply in USA and Europe, offering examples of apps that have been formally approved using these mechanisms. We describe the online Health Apps Library run by the National Health Service in England and the calls for a vetted medical and health app store. We discuss the ingredients for successful apps beyond the rather narrow definition of 'apps as a medical device'. These ingredients cover app content quality, usability, the need to match apps to consumers' general and health literacy levels, device connectivity standards (for apps that connect to glucometers, blood pressure monitors, etc.), as well as app security and user privacy. 'Happtique Health App Certification Program' (HACP), a voluntary app certification scheme, successfully captures most of these desiderata, but is solely focused on apps targeting the US market. HACP, while very welcome, is in ways reminiscent of the early days of the Web, when many \"similar\" quality benchmarking tools and codes of conduct for information publishers were proposed to appraise and rate online medical and health information. It is probably impossible to rate and police every app on offer today, much like in those early days of the Web, when people quickly realised the same regarding informational Web pages. The best first line of defence was, is, and will always be to educate consumers regarding the potentially harmful content of (some) apps.",
"title": ""
},
{
"docid": "6d1f374686b98106ab4221066607721b",
"text": "How does one instigate a scientific revolution, or more modestly, a shift of scientific paradigm? This must have been on the minds of the organizers of the two conferences \"The Economy as an Evolving Complex System, I and II\" and the research program in economics at the Santa Fe Institute documented in the present volume and its predecessor of ten years ago.(1) Their strategy might be reconstructed as follows. First, the stranglehold of neoclassical economics on the Anglo-Saxon academic community since World War II is at least partly due to the ascendancy of mathematical rigor as the touchstone of serious economic theorizing. Thus if one could beat the prevailing paradigm at its own game one would immediately have a better footing in the community than the heretics, mostly from the left or one of the variousìnstitu-tional' camps, who had been sniping at it from the sidelines all the while but were never above the suspicion of not being mathematically up to comprehending it in the first place. Second, one could enlist both prominent representatives and path-breaking methods from the natural sciences to legitimize the introduction of (to economists) fresh and in some ways disturbing approaches to the subject. This was particularly the tack taken in 1987, where roughly equal numbers of scientists and economists were brought together in an extensive brain storming session. Physics has always been the role model for other aspiring`hard' sciences, and physicists seem to have succeeded in institutional-izing a `permanent revolution' in their own methodology , i.e., they are relatively less dogmatic and willing to be more eclectic in the interests of getting results. The fact that, with the exception of a brief chapter by Philip Anderson in the present volume, physicists as representatives of their discipline are no longer present, presumably indicates that their services can now be dispensed with in this enterprise.(2) Finally, one should sponsor research of the highest caliber, always laudable in itself, and make judicious use of key personalities. Care should also be taken that the work is of a form and style which, rather than explicitly provoking the profession, makes it appear as if it were the natural generalization of previous mainstream research and thus reasonably amenable to inclusion in the canon. This while tacitly encouraging and profiting from a wave of publicity in the popular media , a difficult line to tread if one does not want to appear …",
"title": ""
},
{
"docid": "e0c71e449f4c155a993ae04ece4bc822",
"text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.",
"title": ""
},
{
"docid": "f4b6f3b281a420999b60b38c245113a6",
"text": "There is growing interest in using intranasal oxytocin (OT) to treat social dysfunction in schizophrenia and bipolar disorders (i.e., psychotic disorders). While OT treatment results have been mixed, emerging evidence suggests that OT system dysfunction may also play a role in the etiology of metabolic syndrome (MetS), which appears in one-third of individuals with psychotic disorders and associated with increased mortality. Here we examine the evidence for a potential role of the OT system in the shared risk for MetS and psychotic disorders, and its prospects for ameliorating MetS. Using several studies to demonstrate the overlapping neurobiological profiles of metabolic risk factors and psychiatric symptoms, we show that OT system dysfunction may be one common mechanism underlying MetS and psychotic disorders. Given the critical need to better understand metabolic dysregulation in these disorders, future OT trials assessing behavioural and cognitive outcomes should additionally include metabolic risk factor parameters.",
"title": ""
},
{
"docid": "8612b5e8f00fd8469ba87f1514b69fd0",
"text": "Online gaming is one of the most profitable businesses on the Internet. Among various threats to continuous player subscriptions, network lags are particularly notorious. It is widely known that frequent and long lags frustrate game players, but whether the players actually take action and leave a game is unclear. Motivated to answer this question, we apply survival analysis to a 1, 356-million-packet trace from a sizeable MMORPG, called ShenZhou Online. We find that both network delay and network loss significantly affect a player’s willingness to continue a game. For ShenZhou Online, the degrees of player “intolerance” of minimum RTT, RTT jitter, client loss rate, and server loss rate are in the proportion of 1:2:11:6. This indicates that 1) while many network games provide “ping time,” i.e., the RTT, to players to facilitate server selection, it would be more useful to provide information about delay jitters; and 2) players are much less tolerant of network loss than delay. This is due to the game designer’s decision to transfer data in TCP, where packet loss not only results in additional packet delays due to in-order delivery and retransmission, but also a lower sending rate.",
"title": ""
},
{
"docid": "63663dbc320556f7de09b5060f3815a6",
"text": "There has been a long history of applying AI technologies to address software engineering problems especially on tool automation. On the other hand, given the increasing importance and popularity of AI software, recent research efforts have been on exploring software engineering solutions to improve the productivity of developing AI software and the dependability of AI software. The emerging field of intelligent software engineering is to focus on two aspects: (1) instilling intelligence in solutions for software engineering problems; (2) providing software engineering solutions for intelligent software. This extended abstract shares perspectives on these two aspects of intelligent software engineering.",
"title": ""
},
{
"docid": "ddc56e9f2cbe9c086089870ccec7e510",
"text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.",
"title": ""
},
{
"docid": "83aa2a89f8ecae6a84134a2736a5bb22",
"text": "The activity of dozens of simultaneously recorded neurons can be used to control the movement of a robotic arm or a cursor on a computer screen. This motor neural prosthetic technology has spurred an increased interest in the algorithms by which motor intention can be inferred. The simplest of these algorithms is the population vector algorithm (PVA), where the activity of each cell is used to weight a vector pointing in that neuron's preferred direction. Off-line, it is possible to show that more complicated algorithms, such as the optimal linear estimator (OLE), can yield substantial improvements in the accuracy of reconstructed hand movements over the PVA. We call this open-loop performance. In contrast, this performance difference may not be present in closed-loop, on-line control. The obvious difference between open and closed-loop control is the ability to adapt to the specifics of the decoder in use at the time. In order to predict performance gains that an algorithm may yield in closed-loop control, it is necessary to build a model that captures aspects of this adaptation process. Here we present a framework for modeling the closed-loop performance of the PVA and the OLE. Using both simulations and experiments, we show that (1) the performance gain with certain decoders can be far less extreme than predicted by off-line results, (2) that subjects are able to compensate for certain types of bias in decoders, and (3) that care must be taken to ensure that estimation error does not degrade the performance of theoretically optimal decoders.",
"title": ""
},
{
"docid": "7d8884a7f6137068f8ede464cf63da5b",
"text": "Object detection and localization is a crucial step for inspection and manipulation tasks in robotic and industrial applications. We present an object detection and localization scheme for 3D objects that combines intensity and depth data. A novel multimodal, scale- and rotation-invariant feature is used to simultaneously describe the object's silhouette and surface appearance. The object's position is determined by matching scene and model features via a Hough-like local voting scheme. The proposed method is quantitatively and qualitatively evaluated on a large number of real sequences, proving that it is generic and highly robust to occlusions and clutter. Comparisons with state of the art methods demonstrate comparable results and higher robustness with respect to occlusions.",
"title": ""
},
{
"docid": "850becfa308ce7e93fea77673db8ab50",
"text": "Controlled generation of text is of high practical use. Recent efforts have made impressive progress in generating or editing sentences with given textual attributes (e.g., sentiment). This work studies a new practical setting of text content manipulation. Given a structured record, such as (PLAYER: Lebron, POINTS: 20, ASSISTS: 10), and a reference sentence, such as Kobe easily dropped 30 points, we aim to generate a sentence that accurately describes the full content in the record, with the same writing style (e.g., wording, transitions) of the reference. The problem is unsupervised due to lack of parallel data in practice, and is challenging to minimally yet effectively manipulate the text (by rewriting/adding/deleting text portions) to ensure fidelity to the structured content. We derive a dataset from a basketball game report corpus as our testbed, and develop a neural method with unsupervised competing objectives and explicit content coverage constraints. Automatic and human evaluations show superiority of our approach over competitive methods including a strong rule-based baseline and prior approaches designed for style transfer.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "ff1ed09b9952f9d0b67d6f6bb1cd507a",
"text": "Microblogging websites have emerged to the center of information production and diffusion, on which people can get useful information from other users’ microblog posts. In the era of Big Data, we are overwhelmed by the large amount of microblog posts. To make good use of these informative data, an effective search tool is required specialized for microblog posts. However, it is not trivial to do microblog search due to the following reasons: 1) microblog posts are noisy and time-sensitive rendering general information retrieval models ineffective. 2) Conventional IR models are not designed to consider microblog-specific features. In this paper, we propose to utilize learning to rank model for microblog search. We combine content-based, microblog-specific and temporal features into learning to rank models, which are found to model microblog posts effectively. To study the performance of learning to rank models, we evaluate our models using tweet data set provided by TERC 2011 and TREC 2012 microblogs track with the comparison of three stateof-the-art information retrieval baselines, vector space model, language model, BM25 model. Extensive experimental studies demonstrate the effectiveness of learning to rank models and the usefulness to integrate microblog-specific and temporal information for microblog search task.",
"title": ""
},
{
"docid": "d4a96cc393a3f1ca3bca94a57e07941e",
"text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.",
"title": ""
},
{
"docid": "188c55ef248f7021a66c1f2e05c2fc98",
"text": "The objective of the proposed study is to explore the performance of credit scoring using a two-stage hybrid modeling procedure with artificial neural networks and multivariate adaptive regression splines (MARS). The rationale under the analyses is firstly to use MARS in building the credit scoring model, the obtained significant variables are then served as the input nodes of the neural networks model. To demonstrate the effectiveness and feasibility of the proposed modeling procedure, credit scoring tasks are performed on one bank housing loan dataset using cross-validation approach. As the results reveal, the proposed hybrid approach outperforms the results using discriminant analysis, logistic regression, artificial neural networks and MARS and hence provides an alternative in handling credit scoring tasks. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "70b6abe2cb82eead9235612c1a1998d7",
"text": "PURPOSE\nThe aim of the study was to investigate white blood cell counts and neutrophil to lymphocyte ratio (NLR) as markers of systemic inflammation in the diagnosis of localized testicular cancer as a malignancy with initially low volume.\n\n\nMATERIALS AND METHODS\nThirty-six patients with localized testicular cancer with a mean age of 34.22±14.89 years and 36 healthy controls with a mean age of 26.67±2.89 years were enrolled in the study. White blood cell counts and NLR were calculated from complete blood cell counts.\n\n\nRESULTS\nWhite blood cell counts and NLR were statistically significantly higher in patients with testicular cancer compared with the control group (p<0.0001 for all).\n\n\nCONCLUSIONS\nBoth white blood cell counts and NLR can be used as a simple test in the diagnosis of testicular cancer besides the well-known accurate serum tumor markers as AFP (alpha fetoprotein), hCG (human chorionic gonadotropin) and LDH (lactate dehydrogenase).",
"title": ""
},
{
"docid": "655413f10d0b99afd15d54d500c9ffb6",
"text": "Herbal medicine (phytomedicine) uses remedies possessing significant pharmacological activity and, consequently, potential adverse effects and drug interactions. The explosion in sales of herbal therapies has brought many products to the marketplace that do not conform to the standards of safety and efficacy that physicians and patients expect. Unfortunately, few surgeons question patients regarding their use of herbal medicines, and 70% of patients do not reveal their use of herbal medicines to their physicians and pharmacists. All surgeons should question patients about the use of the following common herbal remedies, which may increase the risk of bleeding during surgical procedures: feverfew, garlic, ginger, ginkgo, and Asian ginseng. Physicians should exercise caution in prescribing retinoids or advising skin resurfacing in patients using St John's wort, which poses a risk of photosensitivity reaction. Several herbal medicines, such as aloe vera gel, contain pharmacologically active ingredients that may aid in wound healing. Practitioners who wish to recommend herbal medicines to patients should counsel them that products labeled as supplements have not been evaluated by the US Food and Drug Administration and that no guarantee of product quality can be made.",
"title": ""
},
{
"docid": "5c45aa22bb7182259f75260c879f81d6",
"text": "This paper presents an approach to parsing the Manhattan structure of an indoor scene from a single RGBD frame. The problem of recovering the floor plan is recast as an optimal labeling problem which can be solved efficiently using Dynamic Programming.",
"title": ""
},
{
"docid": "0bba0afb68f80afad03d0ba3d1ce9c89",
"text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.",
"title": ""
},
{
"docid": "89ed5dc0feb110eb3abc102c4e50acaf",
"text": "Automatic object detection in infrared images is a vital task for many military defense systems. The high detection rate and low false detection rate of this phase directly affect the performance of the following algorithms in the system as well as the general performance of the system. In this work, a fast and robust algorithm is proposed for detection of small and high intensity objects in infrared scenes. Top-hat transformation and mean filter was used to increase the visibility of the objects, and a two-layer thresholding algorithm was introduced to calculate the object sizes more accurately. Finally, small objects extracted by using post processing methods.",
"title": ""
}
] |
scidocsrr
|
a581bf7dccc5aeda806e48bddd7d5bc4
|
Mixed Membership Stochastic Blockmodels
|
[
{
"docid": "b8f23ec8e704ee1cf9dbe6063a384b09",
"text": "The Dirichlet distribution and its compound variant, the Dirichlet-multinomial, are two of the most basic models for proportional data, such as the mix of vocabulary words in a text document. Yet the maximum-likelihood estimate of these distributions is not available in closed-form. This paper describes simple and efficient iterative schemes for obtaining parameter estimates in these models. In each case, a fixed-point iteration and a Newton-Raphson (or generalized Newton-Raphson) iteration is provided. 1 The Dirichlet distribution The Dirichlet distribution is a model of how proportions vary. Let p denote a random vector whose elements sum to 1, so that pk represents the proportion of item k. Under the Dirichlet model with parameter vector α, the probability density at p is p(p) ∼ D(α1, ..., αK) = Γ( ∑ k αk) ∏ k Γ(αk) ∏ k pk k (1) where pk > 0 (2)",
"title": ""
}
] |
[
{
"docid": "51165fba0bc57e99069caca5796398c7",
"text": "Reinforcement learning has achieved several successes in sequential decision problems. However, these methods require a large number of training iterations in complex environments. A standard paradigm to tackle this challenge is to extend reinforcement learning to handle function approximation with deep learning. Lack of interpretability and impossibility to introduce background knowledge limits their usability in many safety-critical real-world scenarios. In this paper, we study how to combine reinforcement learning and external knowledge. We derive a rule-based variant version of the Sarsa(λ) algorithm, which we call Sarsarb(λ), that augments data with complex knowledge and exploits similarities among states. We apply our method to a trading task from the Stock Market Environment. We show that the resulting algorithm leads to much better performance but also improves training speed compared to the Deep Qlearning (DQN) algorithm and the Deep Deterministic Policy Gradients (DDPG) algorithm.",
"title": ""
},
{
"docid": "339aa2d53be2cf1215caa142ad5c58d2",
"text": "A true random number generator (TRNG) is an important component in cryptographic systems. Designing a fast and secure TRNG in an FPGA is a challenging task. In this paper we analyze the TRNG designed by Sunar et al. based on XOR of the outputs of many oscillator rings. We propose an enhanced TRNG that does not require post-processing to pass statistical tests and with better randomness characteristics on the output. We have shown by experiment that the frequencies of the equal length oscillator rings in the TRNG are not identical but different due to the placement of the inverters in the FPGA. We have implemented our proposed TRNG in an Altera Cyclone II FPGA. Our implementation has passed the NIST and DIEHARD statistical tests with a throughput of 100 Mbps and with a usage of less than 100 logic elements in the FPGA.",
"title": ""
},
{
"docid": "a1bf728c54cec3f621a54ed23a623300",
"text": "Machine learning algorithms are now common in the state-ofthe-art spoken language understanding models. But to reach good performance they must be trained on a potentially large amount of data which are not available for a variety of tasks and languages of interest. In this work, we present a novel zero-shot learning method, based on word embeddings, allowing to derive a full semantic parser for spoken language understanding. No annotated in-context data are needed, the ontological description of the target domain and generic word embedding features (learned from freely available general domain data) suffice to derive the model. Two versions are studied with respect to how the model parameters and decoding step are handled, including an extension of the proposed approach in the context of conditional random fields. We show that this model, with very little supervision, can reach instantly performance comparable to those obtained by either state-of-the-art carefully handcrafted rule-based or trained statistical models for extraction of dialog acts on the Dialog State Tracking test datasets (DSTC2 and 3).",
"title": ""
},
{
"docid": "1d1651943403ba91927553d24627f5f0",
"text": "BACKGROUND\nObesity is a growing epidemic in the United States, with waistlines expanding (overweight) for almost 66% of the population (National Health and Nutrition Examination Survey 1999-2004). The attitude of society, which includes healthcare providers, toward people of size has traditionally been negative, regardless of their own gender, age, experience, and occupation. The purpose of the present study was to determine whether bariatric sensitivity training could improve nursing attitudes and beliefs toward adult obese patients and whether nurses' own body mass index (BMI) affected their attitude and belief scores.\n\n\nMETHODS\nAn on-line survey was conducted of nursing attitudes and beliefs regarding adult obese patients. The responses were compared between 1 hospital that offered bariatric sensitivity training and 1 that did not. The primary study measures were 2 scales that have been validated to assess weight bias: Attitudes Toward Obese Persons (ATOP) and Beliefs Against Obese Persons (BAOP). The primary outcome measures were the scores derived from the ATOP and BAOP scales.\n\n\nRESULTS\nData were obtained from 332 on-line surveys, to which 266 nurses responded with complete data, 145 from hospital 1 (intervention) and 121 from hospital 2 (control). The mean ATOP scores for hospital 1 were modestly greater than those for hospital 2 (18.0 versus 16.1, P = .03). However, no differences were found between the 2 hospitals for the mean BAOP scores (67.1 versus 67.1, P = .86). No statistically significant differences were found between the 2 hospitals among the BMI groups for either ATOP or BAOP. Within each hospital, no statistically significant trend was found among the BMI groups for either ATOP or BAOP. The association of BMI with the overall ATOP (r = .13, P = .04) and BOAP (r = .12, P = .05) scores was very weak, although marginally significant. The association of the overall ATOP score with the BAOP score was weak, although significant (r = .26, P < .001).\n\n\nCONCLUSION\nAnnual bariatric sensitivity training might improve nursing attitudes toward obese patients, but it does not improve nursing beliefs, regardless of the respondent's BMI.",
"title": ""
},
{
"docid": "fc2a45aa3ec8e4d27b9fc1a86d24b86d",
"text": "Information and Communication Technologies (ICT) rapidly migrate towards the Future Internet (FI) era, which is characterized, among others, by powerful and complex network infrastructures and innovative applications, services and content. An application area that attracts immense research interest is transportation. In particular, traffic congestions, emergencies and accidents reveal inefficiencies in transportation infrastructures, which can be overcome through the exploitation of ICT findings, in designing systems that are targeted at traffic / emergency management, namely Intelligent Transportation Systems (ITS). This paper considers the potential connection of vehicles to form vehicular networks that communicate with each other at an IP-based level, exchange information either directly or indirectly (e.g. through social networking applications and web communities) and contribute to a more efficient and green future world of transportation. In particular, the paper presents the basic research areas that are associated with the concept of Internet of Vehicles (IoV) and outlines the fundamental research challenges that arise there from.",
"title": ""
},
{
"docid": "fa68493c999a154dfc8638aa27255e93",
"text": "We develop a kernel density estimation method for estimating the density of points on a network and implement the method in the GIS environment. This method could be applied to, for instance, finding 'hot spots' of traffic accidents, street crimes or leakages in gas and oil pipe lines. We first show that the application of the ordinary two-dimensional kernel method to density estimation on a network produces biased estimates. Second, we formulate a 'natural' extension of the univariate kernel method to density estimation on a network, and prove that its estimator is biased; in particular, it overestimates the densities around nodes. Third, we formulate an unbiased discontinuous kernel function on a network, and fourth, an unbiased continuous kernel function on a network. Fifth, we develop computational methods for these kernels and derive their computational complexity. We also develop a plug-in tool for operating these methods in the GIS environment. Sixth, an application of the proposed methods to the density estimation of bag-snatches on streets is illustrated. Lastly, we summarize the major results and describe some suggestions for the practical use of the proposed methods.",
"title": ""
},
{
"docid": "1d0241833add973cc7cf6117735b7a1a",
"text": "This paper describes the conception and the construction of a low cost spin coating machine incorporating inexpensive electronic components and open-source technology based on Arduino platform. We present and discuss the details of the electrical, mechanical and control parts. This system will coat thin film in a micro level thickness and the microcontroller ATM 328 circuit controls and adjusts the spinning speed. We prepare thin films with good uniformity for various thicknesses by this spin coating system. The thickness and uniformity of deposited films were verified by determining electronic absorption spectra. We show that thin film thickness depends on the spin speed in the range of 2000–3500 rpm. We compare the results obtained on TiO2 layers deposited by our developed system to those grown by using a standard commercial spin coating systems.",
"title": ""
},
{
"docid": "6213ab7cc5e580d826f7e8fc3fbc72e3",
"text": "Non-orthogonal multiple access (NoMA) as an efficient way of radio resource sharing can root back to the network information theory. For generations of wireless communication systems design, orthogonal multiple access (OMA) schemes in time, frequency, or code domain have been the main choices due to the limited processing capability in the transceiver hardware, as well as the modest traffic demands in both latency and connectivity. However, for the next generation radio systems, given its vision to connect everything and the much evolved hardware capability, NoMA has been identified as a promising technology to help achieve all the targets in system capacity, user connectivity, and service latency. This article will provide a systematic overview of the state-of-the-art design of the NoMA transmission based on a unified transceiver design framework, the related standardization progress, and some promising use cases in future cellular networks, based on which the interested researchers can get a quick start in this area.",
"title": ""
},
{
"docid": "2810574ed772d8bc6cdd8e038185ec23",
"text": "The rapid growth of digital image collections has prompted the need for development of software tools that facilitate efficient searching and retrieval of images from large image databases. Towards this goal, we propose a content-based image retrieval scheme for retrieval of images via their color, texture, and shape features. Using three specialized histograms (i.e. color, wavelet, and edge histograms), we show that a more accurate representation of the underlying distribution of the image features improves the retrieval quality. Furthermore, in an attempt to better represent the user’s information needs, our system provides an interactive search mechanism through the user interface. Users searching through the database can select the visual features and adjust the associated weights according to the aspects they wish to emphasize. The proposed histogram-based scheme has been thoroughly evaluated using two general-purpose image datasets consisting of 1000 and 3000 images, respectively. Experimental results show that this scheme not only improves the effectiveness of the CBIR system, but also improves the efficiency of the overall process.",
"title": ""
},
{
"docid": "f4be6b2bf1cd462ec758fe37b098eef1",
"text": "Recent work has established an empirically successful framework for adapting learning rates for stochastic gradient descent (SGD). This effectively removes all needs for tuning, while automatically reducing learning rates over time on stationary problems, and permitting learning rates to grow appropriately in nonstationary tasks. Here, we extend the idea in three directions, addressing proper minibatch parallelization, including reweighted updates for sparse or orthogonal gradients, improving robustness on non-smooth loss functions, in the process replacing the diagonal Hessian estimation procedure that may not always be available by a robust finite-difference approximation. The final algorithm integrates all these components, has linear complexity and is hyper-parameter free.",
"title": ""
},
{
"docid": "507353e988950736e35f78185d320ce4",
"text": "Traceability is an important concern in projects that span different engineering domains. Traceability can also be mandated, exploited and managed across the engineering lifecycle, and may involve defining connections between heterogeneous models. As a result, traceability can be considered to be multi-domain. This thesis introduces the concept and challenges of multi-domain traceability and explains how it can be used to support typical traceability scenarios. It proposes a model-based approach to develop a traceability solution which effectively operates across multiple engineering domains. The approach introduced a collection of tasks and structures which address the identified challenges for a traceability solution in multi-domain projects. The proposed approach demonstrates that modelling principles and MDE techniques can help to address current challenges and consequently improve the effectiveness of a multi-domain traceability solution. A prototype of the required tooling to support the approach is implemented with EMF and atop Epsilon; it consists of an implementation of the proposed structures (models) and model management operations to support traceability. Moreover, the approach is illustrated in the context of two safety-critical projects where multi-domain traceability is required to underpin certification arguments.",
"title": ""
},
{
"docid": "49af355cfc9e13234a2a3b115f225c1b",
"text": "Tattoos play an important role in many religions. Tattoos have been used for thousands of years as important tools in ritual and tradition. Judaism, Christianity, and Islam have been hostile to the use of tattoos, but many religions, in particular Buddhism and Hinduism, make extensive use of them. This article examines their use as tools for protection and devotion.",
"title": ""
},
{
"docid": "869b991ffd62a00f993e5154af1fd8cb",
"text": "Recently mobile communication systems such as 3GPP LTE, IEEE802.16e have been growing interest in the femto-cell systems for increase of data rates and enhanced call quality. In a femto-cell system, frequent handover among femto-cells may happen. Also, the probability of handover failure may increase during the frequent handover. In this paper, we propose a load-balanced handover with adaptive hysteresis in a femto-cell system considering channel allocation status of target base station. The simulation results show that the proposed schemes reduce the ping-pong rate and improve the MS's handover-related performance in terms of handover failure probability compared with the conventional handover method in a femto-cell system.",
"title": ""
},
{
"docid": "8df1395775e139c281512e4e4c1920d9",
"text": "Over the past 20 years, breakthrough discoveries of chromatin-modifying enzymes and associated mechanisms that alter chromatin in response to physiological or pathological signals have transformed our knowledge of epigenetics from a collection of curious biological phenomena to a functionally dissected research field. Here, we provide a personal perspective on the development of epigenetics, from its historical origins to what we define as 'the modern era of epigenetic research'. We primarily highlight key molecular mechanisms of and conceptual advances in epigenetic control that have changed our understanding of normal and perturbed development.",
"title": ""
},
{
"docid": "c4f6ccec24ff18ba839a83119b125f04",
"text": "The growing rehabilitation and consumer movement toward independent community living for disabled adults has placed new demands on the health care delivery system. ProgTams must be developed for the disabled adult that provide direct training in adaptive community skills, such as banking, budgeting, consumer advocacy, personal health care, and attendant management. An Independent Living Skills Training Program that uses a psychoeducational model is described. To date, 17 multiply handicapped adults, whose average length of institutionalization was I 1.9 years, have participated in the program. Of these 17, 58.8% returned to community living and 23.5% are waiting for openings m accessible housing units.",
"title": ""
},
{
"docid": "177c52ba3d4e80274b3d90229fcce535",
"text": "We address the problem of classifying sparsely labeled networks, where labeled nodes in the network are extremely scarce. Existing algorithms, such as collective classification, have been shown to be effective for jointly deriving labels of related nodes, by exploiting class label dependencies among neighboring nodes. However, when the underlying network is sparsely labeled, most nodes have too few or even no connections to labeled nodes. This makes it very difficult to leverage supervised knowledge from labeled nodes to accurately estimate label dependencies, thereby largely degrading the classification accuracy. In this paper, we propose a novel discriminative matrix factorization (DMF) based algorithm that effectively learns a latent network representation by exploiting topological paths between labeled and unlabeled nodes, in addition to nodes' content information. The main idea is to use matrix factorization to obtain a compact representation of the network that fully encodes nodes' content information and network structure, and unleash discriminative power inferred from labeled nodes to directly benefit collective classification. To achieve this, we formulate a new matrix factorization objective function that integrates network representation learning with an empirical loss minimization for classifying node labels. An efficient optimization algorithm based on conjugate gradient methods is proposed to solve the new objective function. Experimental results on real-world networks show that DMF yields superior performance gain over the state-of-the-art baselines on sparsely labeled networks.",
"title": ""
},
{
"docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878",
"text": "The Cancer Genome Atlas (TCGA) is a public funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.",
"title": ""
},
{
"docid": "ccc3cf21c4c97f9c56915b4d1e804966",
"text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.",
"title": ""
},
{
"docid": "c0d722d72955dd1ec6df3cc24289979f",
"text": "Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information.",
"title": ""
},
{
"docid": "813238ec00d6ee78ff9a584a152377f6",
"text": "Exercise-induced muscle injury in humans frequently occurs after unaccustomed exercise, particularly if the exercise involves a large amount of eccentric (muscle lengthening) contractions. Direct measures of exercise-induced muscle damage include cellular and subcellular disturbances, particularly Z-line streaming. Several indirectly assessed markers of muscle damage after exercise include increases in T2 signal intensity via magnetic resonance imaging techniques, prolonged decreases in force production measured during both voluntary and electrically stimulated contractions (particularly at low stimulation frequencies), increases in inflammatory markers both within the injured muscle and in the blood, increased appearance of muscle proteins in the blood, and muscular soreness. Although the exact mechanisms to explain these changes have not been delineated, the initial injury is ascribed to mechanical disruption of the fiber, and subsequent damage is linked to inflammatory processes and to changes in excitation-contraction coupling within the muscle. Performance of one bout of eccentric exercise induces an adaptation such that the muscle is less vulnerable to a subsequent bout of eccentric exercise. Although several theories have been proposed to explain this \"repeated bout effect,\" including altered motor unit recruitment, an increase in sarcomeres in series, a blunted inflammatory response, and a reduction in stress-susceptible fibers, there is no general agreement as to its cause. In addition, there is controversy concerning the presence of sex differences in the response of muscle to damage-inducing exercise. In contrast to the animal literature, which clearly shows that females experience less damage than males, research using human studies suggests that there is either no difference between men and women or that women are more prone to exercise-induced muscle damage than are men.",
"title": ""
}
] |
scidocsrr
|
8f514b69680f77c0cd9f0ab33a16e225
|
Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis
|
[
{
"docid": "882f2fa1782d530bbc2cbccdd5a194bd",
"text": "Shape prior plays an important role in accurate and robust liver segmentation. However, liver shapes have complex variations and accurate modeling of liver shapes is challenging. Using large-scale training data can improve the accuracy but it limits the computational efficiency. In order to obtain accurate liver shape priors without sacrificing the efficiency when dealing with large-scale training data, we investigate effective and scalable shape prior modeling method that is more applicable in clinical liver surgical planning system. We employed the Sparse Shape Composition (SSC) to represent liver shapes by an optimized sparse combination of shapes in the repository, without any assumptions on parametric distributions of liver shapes. To leverage large-scale training data and improve the computational efficiency of SSC, we also introduced a homotopy-based method to quickly solve the L1-norm optimization problem in SSC. This method takes advantage of the sparsity of shape modeling, and solves the original optimization problem in SSC by continuously transforming it into a series of simplified problems whose solution is fast to compute. When new training shapes arrive gradually, the homotopy strategy updates the optimal solution on the fly and avoids re-computing it from scratch. Experiments showed that SSC had a high accuracy and efficiency in dealing with complex liver shape variations, excluding gross errors and preserving local details on the input liver shape. The homotopy-based SSC had a high computational efficiency, and its runtime increased very slowly when repository's capacity and vertex number rose to a large degree. When repository's capacity was 10,000, with 2000 vertices on each shape, homotopy method cost merely about 11.29 s to solve the optimization problem in SSC, nearly 2000 times faster than interior point method. The dice similarity coefficient (DSC), average symmetric surface distance (ASD), and maximum symmetric surface distance measurement was 94.31 ± 3.04%, 1.12 ± 0.69 mm and 3.65 ± 1.40 mm respectively.",
"title": ""
},
{
"docid": "1de10e40580ba019045baaa485f8e729",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
}
] |
[
{
"docid": "4c410bb0390cc4611da4df489c89fca0",
"text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.",
"title": ""
},
{
"docid": "fb9c0650f5ac820eef3df65b7de1ff12",
"text": "Since 2013, a number of studies have enhanced the literature and have guided clinicians on viable treatment interventions outside of pharmacotherapy and surgery. Thirty-three randomized controlled trials and one large observational study on exercise and physiotherapy were published in this period. Four randomized controlled trials focused on dance interventions, eight on treatment of cognition and behavior, two on occupational therapy, and two on speech and language therapy (the latter two specifically addressed dysphagia). Three randomized controlled trials focused on multidisciplinary care models, one study on telemedicine, and four studies on alternative interventions, including music therapy and mindfulness. These studies attest to the marked interest in these therapeutic approaches and the increasing evidence base that places nonpharmacological treatments firmly within the integrated repertoire of treatment options in Parkinson's disease.",
"title": ""
},
{
"docid": "605e478250d1c49107071e47a9cb00df",
"text": "In line with the increasing use of sensors and health application, there are huge efforts on processing of collected data to extract valuable information such as accelerometer data. This study will propose activity recognition model aim to detect the activities by employing ensemble of classifiers techniques using the Wireless Sensor Data Mining (WISDM). The model will recognize six activities namely walking, jogging, upstairs, downstairs, sitting, and standing. Many experiments are conducted to determine the best classifier combination for activity recognition. An improvement is observed in the performance when the classifiers are combined than when used individually. An ensemble model is built using AdaBoost in combination with decision tree algorithm C4.5. The model effectively enhances the performance with an accuracy level of 94.04 %. Keywords—Activity Recognition; Sensors; Smart phones; accelerometer data; Data mining; Ensemble",
"title": ""
},
{
"docid": "63e58ac7e6f3b4a463e8f8182fee9be5",
"text": "In this work, we propose “global style tokens” (GSTs), a bank of embeddings that are jointly trained within Tacotron, a state-of-the-art end-toend speech synthesis system. The embeddings are trained with no explicit labels, yet learn to model a large range of acoustic expressiveness. GSTs lead to a rich set of significant results. The soft interpretable “labels” they generate can be used to control synthesis in novel ways, such as varying speed and speaking style – independently of the text content. They can also be used for style transfer, replicating the speaking style of a single audio clip across an entire long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn to factorize noise and speaker identity, providing a path towards highly scalable but robust speech synthesis.",
"title": ""
},
{
"docid": "3cdbc153caaafcea54228b0c847aa536",
"text": "BACKGROUND\nAlthough the use of filling agents for soft-tissue augmentation has increased worldwide, most consensus statements do not distinguish between ethnic populations. There are, however, significant differences between Caucasian and Asian faces, reflecting not only cultural disparities, but also distinctive treatment goals. Unlike aesthetic patients in the West, who usually seek to improve the signs of aging, Asian patients are younger and request a broader range of indications.\n\n\nMETHODS\nMembers of the Asia-Pacific Consensus group-comprising specialists from the fields of dermatology, plastic surgery, anatomy, and clinical epidemiology-convened to develop consensus recommendations for Asians based on their own experience using cohesive polydensified matrix, hyaluronic acid, and calcium hydroxylapatite fillers.\n\n\nRESULTS\nThe Asian face demonstrates differences in facial structure and cosmetic ideals. Improving the forward projection of the \"T zone\" (i.e., forehead, nose, cheeks, and chin) forms the basis of a safe and effective panfacial approach to the Asian face. Successful augmentation may be achieved with both (1) high- and low-viscosity cohesive polydensified matrix/hyaluronic acid and (2) calcium hydroxylapatite for most indications, although some constraints apply.\n\n\nCONCLUSION\nThe Asia-Pacific Consensus recommendations are the first developed specifically for the use of fillers in Asian populations.\n\n\nCLINCIAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, V.",
"title": ""
},
{
"docid": "a0e243a0edd585303a84fda47b1ae1e1",
"text": "Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for language generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum-likelihood or used convolutional networks for generation. In this work, we show that recurrent neural networks can be trained to generate text with GANs from scratch using curriculum learning, by slowly teaching the model to generate sequences of increasing and variable length. We empirically show that our approach vastly improves the quality of generated sequences compared to a convolutional baseline. 1",
"title": ""
},
{
"docid": "45bd038dd94d388f945c041e7c04b725",
"text": "Entomophagy is widespread among nonhuman primates and is common among many human communities. However, the extent and patterns of entomophagy vary substantially both in humans and nonhuman primates. Here we synthesize the literature to examine why humans and other primates eat insects and what accounts for the variation in the extent to which they do so. Variation in the availability of insects is clearly important, but less understood is the role of nutrients in entomophagy. We apply a multidimensional analytical approach, the right-angled mixture triangle, to published data on the macronutrient compositions of insects to address this. Results showed that insects eaten by humans spanned a wide range of protein-to-fat ratios but were generally nutrient dense, whereas insects with high protein-to-fat ratios were eaten by nonhuman primates. Although suggestive, our survey exposes a need for additional, standardized, data.",
"title": ""
},
{
"docid": "939cd6055f850b8fdb6ba869d375cf25",
"text": "...although PPP lessons are often supplemented with skills lessons, most students taught mainly through conventional approaches such as PPP leave school unable to communicate effectively in English (Stern, 1983). This situation has prompted many ELT professionals to take note of... second language acquisition (SLA) studies... and turn towards holistic approaches where meaning is central and where opportunities for language use abound. Task-based learning is one such approach...",
"title": ""
},
{
"docid": "e623ce85fdeead09fa746e9ae793806e",
"text": "In this paper, we aim to construct a deep neural network which embeds high dimensional symmetric positive definite (SPD) matrices into a more discriminative low dimensional SPD manifold. To this end, we develop two types of basic layers: a 2D fully connected layer which reduces the dimensionality of the SPD matrices, and a symmetrically clean layer which achieves non-linear mapping. Specifically, we extend the classical fully connected layer such that it is suitable for SPD matrices, and we further show that SPD matrices with symmetric pair elements setting zero operations are still symmetric positive definite. Finally, we complete the construction of the deep neural network for SPD manifold learning by stacking the two layers. Experiments on several face datasets demonstrate the effectiveness of the proposed method. Introduction Symmetric positive definite (SPD) matrices have shown powerful representation abilities of encoding image and video information. In computer vision community, the SPD matrix representation has been widely employed in many applications, such as face recognition (Pang, Yuan, and Li 2008; Huang et al. 2015; Wu et al. 2015; Li et al. 2015), object recognition (Tuzel, Porikli, and Meer 2006; Jayasumana et al. 2013; Harandi, Salzmann, and Hartley 2014; Yin et al. 2016), action recognition (Harandi et al. 2016), and visual tracking (Wu et al. 2015). The SPD matrices form a Riemannian manifold, where the Euclidean distance is no longer a suitable metric. Previous works on analyzing the SPD manifold mainly fall into two categories: the local approximation method and the kernel method, as shown in Figure 1(a). The local approximation method (Tuzel, Porikli, and Meer 2006; Sivalingam et al. 2009; Tosato et al. 2010; Carreira et al. 2012; Vemulapalli and Jacobs 2015) locally flattens the manifold and approximates the SPD matrix by a point of the tangent space. The kernel method (Harandi et al. 2012; Wang et al. 2012; Jayasumana et al. 2013; Li et al. 2013; Quang, San Biagio, and Murino 2014; Yin et al. 2016) embeds the manifold into a higher dimensional Reproducing Kernel Hilbert Space (RKHS) via kernel functions. On new ∗corresponding author Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. SPD Manifold Tangent Space",
"title": ""
},
{
"docid": "90e5fc05d96e84668816eb70a06ab709",
"text": "This paper introduces a cooperative parallel metaheuristic for solving the capacitated vehicle routing problem. The proposed metaheuristic consists of multiple parallel tabu search threads that cooperate by asynchronously exchanging best found solutions through a common solution pool. The solutions sent to the pool are clustered according to their similarities. The search history information identified from the solution clusters is applied to guide the intensification or diversification of the tabu search threads. Computational experiments on two sets of large scale benchmarks from the literature demonstrate that the suggested metaheuristic is highly competitive, providing new best solutions to ten of those well-studied instances.",
"title": ""
},
{
"docid": "8ec018e0fc4ca7220387854bdd034a58",
"text": "Despite the overwhelming success of deep learning in various speech processing tasks, the problem of separating simultaneous speakers in a mixture remains challenging. Two major difficulties in such systems are the arbitrary source permutation and unknown number of sources in the mixture. We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source. Attractor points in this study are created by finding the centroids of the sources in the embedding space, which are subsequently used to determine the similarity of each bin in the mixture to each source. The network is then trained to minimize the reconstruction error of each source by optimizing the embeddings. The proposed model is different from prior works in that it implements an end-to-end training, and it does not depend on the number of sources in the mixture. Two strategies are explored in the test time, K-means and fixed attractor points, where the latter requires no post-processing and can be implemented in real-time. We evaluated our system on Wall Street Journal dataset and show 5.49% improvement over the previous state-of-the-art methods.",
"title": ""
},
{
"docid": "a45109840baf74c61b5b6b8f34ac81d5",
"text": "Decision-making groups can potentially benefit from pooling members' information, particularly when members individually have partial and biased information but collectively can compose an unbiased characterization of the decision alternatives. The proposed biased sampling model of group discussion, however, suggests that group members often fail to effectively pool their information because discussion tends to be dominated by (a) information that members hold in common before discussion and (b) information that supports members' existent preferences. In a political caucus simulation, group members individually read candidate descriptions that contained partial information biased against the most favorable candidate and then discussed the candidates as a group. Even though groups could have produced unbiased composites of the candidates through discussion, they decided in favor of the candidate initially preferred by a plurality rather than the most favorable candidate. Group members' preand postdiscussion recall of candidate attributes indicated that discussion tended to perpetuate, not to correct, members' distorted pictures of the candidates.",
"title": ""
},
{
"docid": "a33d982b4dde7c22ffc3c26214b35966",
"text": "Background: In most cases, bug resolution is a collaborative activity among developers in software development where each developer contributes his or her ideas on how to resolve the bug. Although only one developer is recorded as the actual fixer for the bug, the contribution of the developers who participated in the collaboration cannot be neglected.\n Aims: This paper proposes a new approach, called DRETOM (Developer REcommendation based on TOpic Models), to recommending developers for bug resolution in collaborative behavior.\n Method: The proposed approach models developers' interest in and expertise on bug resolving activities based on topic models that are built from their historical bug resolving records. Given a new bug report, DRETOM recommends a ranked list of developers who are potential to participate in and contribute to resolving the new bug according to these developers' interest in and expertise on resolving it.\n Results: Experimental results on Eclipse JDT and Mozilla Firefox projects show that DRETOM can achieve high recall up to 82% and 50% with top 5 and top 7 recommendations respectively.\n Conclusion: Developers' interest in bug resolving activities should be taken into consideration. On condition that the parameter θ of DRETOM is set properly with trials, the proposed approach is practically useful in terms of recall.",
"title": ""
},
{
"docid": "b63077105e140546a7485167339fdf62",
"text": "Deep multi-layer perceptron neural networks are used in many state-of-the-art systems for machine perception (e.g., speech-to-text, image classification, and object detection). Once a network is trained to do a specific task, e.g., finegrained bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize additional bird species or learning an entirely different task such as finegrained flower recognition. When new tasks are added, deep neural networks are prone to catastrophically forgetting previously learned information. Catastrophic forgetting has hindered the use of neural networks in deployed applications that require lifelong learning. There have been multiple attempts to develop schemes that mitigate catastrophic forgetting, but these methods have yet to be compared and the kinds of tests used to evaluate individual methods vary greatly. In this paper, we compare multiple mechanisms designed to mitigate catastrophic forgetting in neural networks. Experiments showed that the mechanism(s) that are critical for optimal performance vary based on the incremental training paradigm and type of data being used.",
"title": ""
},
{
"docid": "e6d79e4a616c4913b605bc3c2a6f8776",
"text": "In this paper an alternative approach to solve uncertain Stochastic Differential Equation (SDE) is proposed. This uncertainty occurs due to the involved parameters in system and these are considered as Triangular Fuzzy Numbers (TFN). Here the proposed fuzzy arithmetic in [2] is used as a tool to handle Fuzzy Stochastic Differential Equation (FSDE). In particular, a system of Ito stochastic differential equations is analysed with fuzzy parameters. Further exact and Euler Maruyama approximation methods with fuzzy values are demonstrated and solved some standard SDE.",
"title": ""
},
{
"docid": "32e01378d68ae1610f538e60edf24d9a",
"text": "Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems. In recent studies, researchers use neural language models and encoder-decoder frameworks for table-to-text generation. However, these neural network-based approaches typically do not model the order of content during text generation. When a human writes a summary based on a given table, he or she would probably consider the content order before wording. In this paper, we propose an order-planning text generation model, where order information is explicitly captured by link-based attention. Then a self-adaptive gate combines the link-based attention with traditional content-based attention. We conducted experiments on the WIKIBIO dataset and achieve higher performance than previous methods in terms of BLEU, ROUGE, and NIST scores; we also performed ablation tests to analyze each component of our model.",
"title": ""
},
{
"docid": "228b94be5c79161343376360cd35db6f",
"text": "Linked Data is proclaimed as the Semantic Web done right. The Semantic Web is an incomplete dream so far, but a homogeneous revolutionary platform as a network of Blockchains could be the solution to this not optimal reality. This research paper introduces some initial hints and ideas about how a futuristic Internet that might be composed and powered by Blockchains networks would be constructed and designed to interconnect data and meaning, thus allow reasoning. An industrial application where Blockchain and Linked Data fits perfectly as a Supply Chain management system is also researched.",
"title": ""
},
{
"docid": "43cdcbfaca6c69cdb8652761f7e8b140",
"text": "Aggregation of local features is a well-studied approach for image as well as 3D model retrieval (3DMR). A carefully designed local 3D geometric feature is able to describe detailed local geometry of 3D model, often with invariance to geometric transformations that include 3D rotation of local 3D regions. For efficient 3DMR, these local features are aggregated into a feature per 3D model. A recent alternative, end-toend 3D Deep Convolutional Neural Network (3D-DCNN) [7][33], has achieved accuracy superior to the abovementioned aggregation-of-local-features approach. However, current 3D-DCNN based methods have weaknesses; they lack invariance against 3D rotation, and they often miss detailed geometrical features due to their quantization of shapes into coarse voxels in applying 3D-DCNN. In this paper, we propose a novel deep neural network for 3DMR called Deep Local feature Aggregation Network (DLAN) that combines extraction of rotation-invariant 3D local features and their aggregation in a single deep architecture. The DLAN describes local 3D regions of a 3D model by using a set of 3D geometric features invariant to local rotation. The DLAN then aggregates the set of features into a (global) rotation-invariant and compact feature per 3D model. Experimental evaluation shows that the DLAN outperforms the existing deep learning-based 3DMR algorithms.",
"title": ""
},
{
"docid": "cc5b1a8100e8d4d7be5dfb80c4866aab",
"text": "A fundamental characteristic of multicellular organisms is the specialization of functional cell types through the process of differentiation. These specialized cell types not only characterize the normal functioning of different organs and tissues, they can also be used as cellular biomarkers of a variety of different disease states and therapeutic/vaccine responses. In order to serve as a reference for cell type representation, the Cell Ontology has been developed to provide a standard nomenclature of defined cell types for comparative analysis and biomarker discovery. Historically, these cell types have been defined based on unique cellular shapes and structures, anatomic locations, and marker protein expression. However, we are now experiencing a revolution in cellular characterization resulting from the application of new high-throughput, high-content cytometry and sequencing technologies. The resulting explosion in the number of distinct cell types being identified is challenging the current paradigm for cell type definition in the Cell Ontology. In this paper, we provide examples of state-of-the-art cellular biomarker characterization using high-content cytometry and single cell RNA sequencing, and present strategies for standardized cell type representations based on the data outputs from these cutting-edge technologies, including “context annotations” in the form of standardized experiment metadata about the specimen source analyzed and marker genes that serve as the most useful features in machine learning-based cell type classification models. We also propose a statistical strategy for comparing new experiment data to these standardized cell type representations. The advent of high-throughput/high-content single cell technologies is leading to an explosion in the number of distinct cell types being identified. It will be critical for the bioinformatics community to develop and adopt data standard conventions that will be compatible with these new technologies and support the data representation needs of the research community. The proposals enumerated here will serve as a useful starting point to address these challenges.",
"title": ""
},
{
"docid": "199079ff97d1a48819f8185c2ef23472",
"text": "Identifying domain-dependent opinion words is a key problem in opinion mining and has been studied by several researchers. However, existing work has been focused on adjectives and to some extent verbs. Limited work has been done on nouns and noun phrases. In our work, we used the feature-based opinion mining model, and we found that in some domains nouns and noun phrases that indicate product features may also imply opinions. In many such cases, these nouns are not subjective but objective. Their involved sentences are also objective sentences and imply positive or negative opinions. Identifying such nouns and noun phrases and their polarities is very challenging but critical for effective opinion mining in these domains. To the best of our knowledge, this problem has not been studied in the literature. This paper proposes a method to deal with the problem. Experimental results based on real-life datasets show promising results.",
"title": ""
}
] |
scidocsrr
|
80d5a1ee4c177058910ee7a708fe8dc3
|
Camera Model Identification Based on the Heteroscedastic Noise Model
|
[
{
"docid": "8055b2c65d5774000fe4fa81ff83efb7",
"text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charged-coupled device ( C C D ) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index T e m s C C D cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.",
"title": ""
}
] |
[
{
"docid": "783c347d3d4f5a191508f005b362164b",
"text": "Workspace awareness is knowledge about others’ interaction with a shared workspace. Groupware systems provide only limited information about other participants, often compromising workspace awareness. This paper describes a usability study of several widgets designed to help maintain awareness in a groupware workspace. These widgets include a miniature view, a radar view, a multiuser scrollbar, a glance function, and a “what you see is what I do” view. The study examined the widgets’ information content, how easily people could interpret them, and whether they were useful or distracting. Observations, questionnaires, and interviews indicate that the miniature and radar displays are useful and valuable for tasks involving spatial manipulation of artifacts.",
"title": ""
},
{
"docid": "331391539cd5a226e9389f96f815fa0d",
"text": "Understanding protein function from amino acid sequence is a fundamental problem in biology. In this project, we explore how well we can represent biological function through examination of raw sequence alone. Using a large corpus of protein sequences and their annotated protein families, we learn dense vector representations for amino acid sequences using the co-occurrence statistics of short fragments. Then, using this representation, we experiment with several neural network architectures to train classifiers for protein family identification. We show good performance for a multi-class prediction problem with 589 protein family classes.",
"title": ""
},
{
"docid": "b94e096ea1bc990bd7c72aab988dd5ff",
"text": "The paper describes the design and implementation of an independent, third party contract monitoring service called Contract Compliance Checker (CCC). The CCC is provided with the specification of the contract in force, and is capable of observing and logging the relevant business-to-business (B2B) interaction events, in order to determine whether the actions of the business partners are consistent with the contract. A contract specification language called EROP (for Events, Rights, Obligations and Prohibitions) for the CCC has been developed based on business rules, that provides constructs to specify what rights, obligation and prohibitions become active and inactive after the occurrence of events related to the execution of business operations. The system has been designed to work with B2B industry standards such as ebXML and RosettaNet.",
"title": ""
},
{
"docid": "9b917dde9a9f9dcf8ed74fd0bb3a07cf",
"text": "We describe an ELECTRONIC SPEAKING GLOVE, designed to facilitate an easy communication through synthesized speech for the benefit of speechless patients. Generally, a speechless person communicates through sign language which is not understood by the majority of people. This final year project is designed to solve this problem. Gestures of fingers of a user of this glove will be converted into synthesized speech to convey an audible message to others, for example in a critical communication with doctors. The glove is internally equipped with multiple flex sensors that are made up of “bend-sensitive resistance elements”. For each specific gesture, internal flex sensors produce a proportional change in resistance of various elements. The processing of this information sends a unique set of signals to the AVR (Advance Virtual RISC) microcontroller which is preprogrammed to speak desired sentences.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "b9546d8f52b19ba99bb9c8f4dc62f2bd",
"text": "One of the main unresolved problems that arise during the data mining process is treating data that contains temporal information. In this case, a complete understanding of the entire phenomenon requires that the data should be viewed as a sequence of events. Temporal sequences appear in a vast range of domains, from engineering, to medicine and finance, and the ability to model and extract information from them is crucial for the advance of the information society. This paper provides a survey on the most significant techniques developed in the past ten years to deal with temporal sequences.",
"title": ""
},
{
"docid": "2085662af2d74d31756674bac9e6a2a7",
"text": "Deep Learning (DL) algorithms have become the de facto choice for data analysis. Several DL implementations – primarily limited to a single compute node – such as Caffe, TensorFlow, Theano and Torch have become readily available. Distributed DL implementations capable of execution on large scale systems are becoming important to address the computational needs of large data produced by scientific simulations and experiments. Yet, the adoption of distributed DL implementations faces significant impediments: 1) most implementations require DL analysts to modify their code significantly – which is a showstopper, 2) several distributed DL implementations are geared towards cloud computing systems – which is inadequate for execution on massively parallel systems such as supercomputers. This work addresses each of these problems. We provide a distributed memory DL implementation by incorporating required changes in the TensorFlow runtime itself. This dramatically reduces the entry barrier for using a distributed TensorFlow implementation. We use Message Passing Interface (MPI) – which provides performance portability, especially since MPI specific changes are abstracted from users. Lastly – and arguably most importantly – we make our implementation available for broader use, under the umbrella of Machine Learning Toolkit for Extreme Scale (MaTEx) at http://hpc.pnl.gov/matex. We refer to our implementation as MaTEx-TensorFlow.",
"title": ""
},
{
"docid": "8840e9e1e304a07724dd6e6779cfc9c4",
"text": "Clustering has become an increasingly important task in modern application domains such as marketing and purchasing assistance, multimedia, molecular biology as well as many others. In most of these areas, the data are originally collected at different sites. In order to extract information from these data, they are merged at a central site and then clustered. In this paper, we propose a different approach. We cluster the data locally and extract suitable representatives from these clusters. These representatives are sent to a global server site where we restore the complete clustering based on the local representatives. This approach is very efficient, because the local clustering can be carried out quickly and independently from each other. Furthermore, we have low transmission cost, as the number of transmitted representatives is much smaller than the cardinality of the complete data set. Based on this small number of representatives, the global clustering can be done very efficiently. For both the local and the global clustering, we use a density based clustering algorithm. The combination of both the local and the global clustering forms our new DBDC (Density Based Distributed Clustering) algorithm. Furthermore, we discuss the complex problem of finding a suitable quality measure for evaluating distributed clusterings. We introduce two quality criteria which are compared to each other and which allow us to evaluate the quality of our DBDC algorithm. In our experimental evaluation, we will show that we do not have to sacrifice clustering quality in order to gain an efficiency advantage when using our distributed clustering approach.",
"title": ""
},
{
"docid": "17fcb38734d6525f2f0fa3ee6c313b43",
"text": "The increasing generation and collection of personal data h as created a complex ecosystem, often collaborative but som etimes combative, around companies and individuals engaging in th e use of these data. We propose that the interactions between these agents warrants a new topic of study: Human-Data Inter action (HDI). In this paper we discuss how HDI sits at the intersection of various disciplines, including computer s cience, statistics, sociology, psychology and behavioura l economics. We expose the challenges that HDI raises, organised into thr ee core themes of legibility, agency and negotiability, and we present the HDI agenda to open up a dialogue amongst interest ed parties in the personal and big data ecosystems.",
"title": ""
},
{
"docid": "72108944c9dfbb4a50da07aea41d22f5",
"text": "This study examined the perception of drug abuse amongst Nigerian undergraduates living off-campus. Students were surveyed at the Lagos State University, Ojo, allowing for a diverse sample that included a large percentage of the students from different faculties and departments. The undergraduate students were surveyed with a structured self-reporting anonymous questionnaire modified and adapted from the WHO student drug survey proforma. Of the 1000 students surveyed, a total of 807 responded to the questionnaire resulting in 80.7% response rate. Majority (77.9%) of the students were aged 19-30 years and unmarried. Six hundred and ninety eight (86.5%) claimed they were aware of drug abuse, but contrarily they demonstrated poor knowledge and awareness. Marijuana, 298 (45.7%) was the most common drug of abuse seen by most of the students. They were unable to identify very well the predisposing factors to drug use and the attending risks. Two hundred and sixty six (33.0%) students were currently taking one or more drugs of abuse. Coffee (43.1%) was the most commonly used drug, followed by alcohol (25.8%) and marijuana (7.4%). Despite chronic use of these drugs (5 years and above), addiction is not a common finding. The study also revealed the poor attitudes of the undergraduates to drug addicts even after rehabilitation. It was therefore concluded that the awareness, knowledge, practices and attitudes of Nigerian undergraduates towards drug abuse is very poor. Considerably more research is needed to develop effective prevention strategy that combines school-based interventions with those affecting the family, social institutions and the larger community.",
"title": ""
},
{
"docid": "1f700c0c55b050db7c760f0c10eab947",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
},
{
"docid": "d950407cfcbc5457b299e05c8352107e",
"text": "Pedicle screw instrumentation in AIS has advantages of rigid fixation, improved deformity correction and a shorter fusion, but needs an exacting technique. The author has been using the K-wire method with intraoperative single PA and lateral radiographs, because it is safe, accurate and fast. Pedicle screws are inserted in every segment on the correction side (thoracic concave) and every 2–3 on the supportive side (thoracic convex). After an over-bent rod is inserted on the corrective side, the rod is rotated 90° counterclockwise. This maneuver corrects the coronal and sagittal curves. Then the vertebra is derotated by direct vertebral rotation (DVR) correcting the rotational deformity. The direction of DVR should be opposite to that of the vertebral rotation. A rigid rod has to be used to prevent the rod from straightening out during the rod derotation and DVR. The ideal classification of AIS should address all curve patterns, predicts accurate fusion extent and have good inter/intraobserver reliability. The Suk classification matches the ideal classification is simple and memorable, and has only four structural curve patterns; single thoracic, double thoracic, double major and thoracolumbar/lumbar. Each curve has two types, A and B. When using pedicle screws in thoracic AIS, curves are usually fused from upper neutral to lower neutral vertebra. Identification of the end vertebra and the neutral vertebra is important in deciding the fusion levels and the direction of DVR. In lumbar AIS, fusion is performed from upper neutral vertebra to L3 or L4 depending on its curve types. Rod derotation and DVR using pedicle screw instrumentation give true three dimensional deformity correction in the treatment of AIS. Suk classification with these methods predicts exact fusion extent and is easy to understand and remember.",
"title": ""
},
{
"docid": "feb672a16dd86db24e8d3700cf507bf9",
"text": "In this paper we propose an efficient method to calculate a highquality depth map from a single raw image captured by a light field or plenoptic camera. The proposed model combines the main idea of Active Wavefront Sampling (AWS) with the light field technique, i.e. we extract so-called sub-aperture images out of the raw image of a plenoptic camera, in such a way that the virtual view points are arranged on circles around a fixed center view. By tracking an imaged scene point over a sequence of sub-aperture images corresponding to a common circle, one can observe a virtual rotation of the scene point on the image plane. Our model is able to measure a dense field of these rotations, which are inversely related to the scene depth.",
"title": ""
},
{
"docid": "8996068836559be2b253cd04aeaa285b",
"text": "We present AutonoVi-Sim, a novel high-fidelity simulation platform for autonomous driving data generation and driving strategy testing. AutonoVi-Sim is a collection of high-level extensible modules which allows the rapid development and testing of vehicle configurations and facilitates construction of complex traffic scenarios. Autonovi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify the specific vehicle sensor systems and vary time of day and weather conditions to generate robust data and gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. Autonovi-Sim facilitates training of deep-learning algorithms by enabling data export from the vehicle's sensors, including camera data, LIDAR, relative positions of traffic participants, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions. In this paper, we detail the simulator and provide specific performance and data benchmarks.",
"title": ""
},
{
"docid": "cf0d0d6895a5e5fbe1eb72e82b4d8b4b",
"text": "PURPOSE\nThe purpose of this study was twofold: (a) to determine the prevalence of compassion satisfaction, compassion fatigue, and burnout in emergency department nurses throughout the United States and (b) to examine which demographic and work-related components affect the development of compassion satisfaction, compassion fatigue, and burnout in this nursing specialty.\n\n\nDESIGN AND METHODS\nThis was a nonexperimental, descriptive, and predictive study using a self-administered survey. Survey packets including a demographic questionnaire and the Professional Quality of Life Scale version 5 (ProQOL 5) were mailed to 1,000 selected emergency nurses throughout the United States. The ProQOL 5 scale was used to measure the prevalence of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses. Multiple regression using stepwise solution was employed to determine which variables of demographics and work-related characteristics predicted the prevalence of compassion satisfaction, compassion fatigue, and burnout. The α level was set at .05 for statistical significance.\n\n\nFINDINGS\nThe results revealed overall low to average levels of compassion fatigue and burnout and generally average to high levels of compassion satisfaction among this group of emergency department nurses. The low level of manager support was a significant predictor of higher levels of burnout and compassion fatigue among emergency department nurses, while a high level of manager support contributed to a higher level of compassion satisfaction.\n\n\nCONCLUSIONS\nThe results may serve to help distinguish elements in emergency department nurses' work and life that are related to compassion satisfaction and may identify factors associated with higher levels of compassion fatigue and burnout.\n\n\nCLINICAL RELEVANCE\nImproving recognition and awareness of compassion satisfaction, compassion fatigue, and burnout among emergency department nurses may prevent emotional exhaustion and help identify interventions that will help nurses remain empathetic and compassionate professionals.",
"title": ""
},
{
"docid": "75c5d060d99058585292a77a94e75dba",
"text": "In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.",
"title": ""
},
{
"docid": "c32a719ac619e7a48adf12fd6a534e7c",
"text": "Using smart devices and apps in clinical trials has great potential: this versatile technology is ubiquitously available, broadly accepted, user friendly and it offers integrated sensors for primary data acquisition and data sending features to allow for a hassle free communication with the study sites. This new approach promises to increase efficiency and to lower costs. This article deals with the ethical and legal demands of using this technology in clinical trials with respect to regulation, informed consent, data protection and liability.",
"title": ""
},
{
"docid": "66fce3b6c516a4fa4281d19d6055b338",
"text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.",
"title": ""
},
{
"docid": "7997cc6aafd50c7ec559270ff69e5d66",
"text": "Cloud computing adoption and diffusion are threatened by unresolved security issues that affect both the cloud provider and the cloud user. In this paper, we show how virtualization can increase the security of cloud computing, by protecting both the integrity of guest virtual machines and the cloud infrastructure components. In particular, we propose a novel architecture, Advanced Cloud Protection System (ACPS), aimed at guaranteeing increased security to cloud resources. ACPS can be deployed on several cloud solutions and can effectively monitor the integrity of guest and infrastructure components while remaining fully transparent to virtual machines and to cloud users. ACPS can locally react to security breaches as well as notify a further security management layer of such events. A prototype of our ACPS proposal is fully implemented on two current open source solutions: Eucalyptus and OpenECP. The prototype is tested against effectiveness and performance. In particular: (a) effectiveness is shown testing our prototype against attacks known in the literature; (b) performance evaluation of the ACPS prototype is carried out under different types of workload. Results show that our proposal is resilient against attacks and that the introduced overhead is small when compared to the provided",
"title": ""
}
] |
scidocsrr
|
c0159657811c724b694af1cb60a2c215
|
How to increase and sustain positive emotion: The effects of expressing gratitude and visualizing best possible selves
|
[
{
"docid": "c03265e4a7d7cc14e6799c358a4af95a",
"text": "Three studies considered the consequences of writing, talking, and thinking about significant events. In Studies 1 and 2, students wrote, talked into a tape recorder, or thought privately about their worst (N = 96) or happiest experience (N = 111) for 15 min each during 3 consecutive days. In Study 3 (N = 112), students wrote or thought about their happiest day; half systematically analyzed, and half repetitively replayed this day. Well-being and health measures were administered before each study's manipulation and 4 weeks after. As predicted, in Study 1, participants who processed a negative experience through writing or talking reported improved life satisfaction and enhanced mental and physical health relative to those who thought about it. The reverse effect for life satisfaction was observed in Study 2, which focused on positive experiences. Study 3 examined possible mechanisms underlying these effects. Students who wrote about their happiest moments--especially when analyzing them--experienced reduced well-being and physical health relative to those who replayed these moments. Results are discussed in light of current understanding of the effects of processing life events.",
"title": ""
},
{
"docid": "f515695b3d404d29a12a5e8e58a91fc0",
"text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.",
"title": ""
}
] |
[
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "9256277615e0016992d007b29a2bcf21",
"text": "Three experiments explored how words are learned from hearing them across contexts. Adults watched 40-s videotaped vignettes of parents uttering target words (in sentences) to their infants. Videos were muted except for a beep or nonsense word inserted where each \"mystery word\" was uttered. Participants were to identify the word. Exp. 1 demonstrated that most (90%) of these natural learning instances are quite uninformative, whereas a small minority (7%) are highly informative, as indexed by participants' identification accuracy. Preschoolers showed similar information sensitivity in a shorter experimental version. Two further experiments explored how cross-situational information helps, by manipulating the serial ordering of highly informative vignettes in five contexts. Response patterns revealed a learning procedure in which only a single meaning is hypothesized and retained across learning instances, unless disconfirmed. Neither alternative hypothesized meanings nor details of past learning situations were retained. These findings challenge current models of cross-situational learning which assert that multiple meaning hypotheses are stored and cross-tabulated via statistical procedures. Learners appear to use a one-trial \"fast-mapping\" procedure, even under conditions of referential uncertainty.",
"title": ""
},
{
"docid": "7c27bfa849ba0bd49f9ddaec9beb19b5",
"text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.",
"title": ""
},
{
"docid": "205a38ac9f2df57a33481d36576e7d54",
"text": "Business process improvement initiatives typically employ various process analysis techniques, including evidence-based analysis techniques such as process mining, to identify new ways to streamline current business processes. While plenty of process mining techniques have been proposed to extract insights about the way in which activities within processes are conducted, techniques to understand resource behaviour are limited. At the same time, an understanding of resources behaviour is critical to enable intelligent and effective resource management an important factor which can significantly impact overall process performance. The presence of detailed records kept by today’s organisations, including data about who, how, what, and when various activities were carried out by resources, open up the possibility for real behaviours of resources to be studied. This paper proposes an approach to analyse one aspect of resource behaviour: the manner in which a resource prioritises his/her work. The proposed approach has been formalised, implemented, and evaluated using a number of synthetic and real datasets. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c625221e79bdc508c7c772f5be0458a1",
"text": "Word embeddings that can capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning contextbased word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having an opposite sentiment polarity (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model is based on adjusting the vector representations of words such that they can be closer to both semantically and sentimentally similar words and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on Stanford Sentiment Treebank (SST).",
"title": ""
},
{
"docid": "1e176f66a29b6bd3dfce649da1a4db9d",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "e3bb490de9489a0c02f023d25f0a94d7",
"text": "During the past two decades, self-efficacy has emerged as a highly effective predictor of students' motivation and learning. As a performance-based measure of perceived capability, self-efficacy differs conceptually and psychometrically from related motivational constructs, such as outcome expectations, self-concept, or locus of control. Researchers have succeeded in verifying its discriminant validity as well as convergent validity in predicting common motivational outcomes, such as students' activity choices, effort, persistence, and emotional reactions. Self-efficacy beliefs have been found to be sensitive to subtle changes in students' performance context, to interact with self-regulated learning processes, and to mediate students' academic achievement. Copyright 2000 Academic Press.",
"title": ""
},
{
"docid": "bef64076bf62d9e8fbb6fbaf5534fdc6",
"text": "This paper presents an application of PageRank, a random-walk model originally devised for ranking Web search results, to ranking WordNet synsets in terms of how strongly they possess a given semantic property. The semantic properties we use for exemplifying the approach are positivity and negativity, two properties of central importance in sentiment analysis. The idea derives from the observation that WordNet may be seen as a graph in which synsets are connected through the binary relation “a term belonging to synset sk occurs in the gloss of synset si”, and on the hypothesis that this relation may be viewed as a transmitter of such semantic properties. The data for this relation can be obtained from eXtended WordNet, a publicly available sensedisambiguated version of WordNet. We argue that this relation is structurally akin to the relation between hyperlinked Web pages, and thus lends itself to PageRank analysis. We report experimental results supporting our intuitions.",
"title": ""
},
{
"docid": "574838d3fecf8e8dfc4254b41d446ad2",
"text": "This paper proposes a new procedure to detect Glottal Closure and Opening Instants (GCIs and GOIs) directly from speech waveforms. The procedure is divided into two successive steps. First a mean-based signal is computed, and intervals where speech events are expected to occur are extracted from it. Secondly, at each interval a precise position of the speech event is assigned by locating a discontinuity in the Linear Prediction residual. The proposed method is compared to the DYPSA algorithm on the CMU ARCTIC database. A significant improvement as well as a better noise robustness are reported. Besides, results of GOI identification accuracy are promising for the glottal source characterization.",
"title": ""
},
{
"docid": "cee0d7bac437a3a98fa7aba31969341b",
"text": "Throughout history, the educational process used different educational technologies which did not significantly alter the manner of learning in the classroom. By implementing e-learning technology to the educational process, new and completely different innovative learning scenarios are made possible, including more active student involvement outside the traditional classroom. The quality of the realization of the educational objective in any learning environment depends primarily on the teacher who creates the educational process, mentors and acts as a moderator in the communication within the educational process, but also relies on the student who acquires the educational content. The traditional classroom learning and e-learning environment enable different manners of adopting educational content, and this paper reveals their key characteristics with the purpose of better use of e-learning technology in the educational process.",
"title": ""
},
{
"docid": "e0c87b957faf9c14ce96ed09f968e8ee",
"text": "It is well-known that the power factor of Vernier machines is small compared to permanent magnet machines. However, the power factor equations already derived show a huge deviation to the finite-element analysis (FEA) when used for Vernier machines with concentrated windings. Therefore, this paper develops an analytic model to calculate the power factor of Vernier machines with concentrated windings and different numbers of flux modulating poles (FMPs) and stator slots. The established model bases on the winding function theory in combination with a magnetic equivalent circuit. Consequently, equations for the q-inductance and for the no-load back-EMF of the machine are derived, thus allowing the calculation of the power factor. Thereby, the model considers stator leakage effects, as they are crucial for a good power factor estimation. Comparing the results of the Vernier machine to those of a pm machine explains the decreased power factor of Vernier machines. In addition, a FEA confirms the results of the derived model.",
"title": ""
},
{
"docid": "1d724b07c232098e2a5e5af2bb1e7c83",
"text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.",
"title": ""
},
{
"docid": "7843fb4bbf2e94a30c18b359076899ab",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "47199e959f3b10c6fa6b4b8c68434b94",
"text": "The everyday use of smartphones with high quality built-in cameras has lead to an increase in museum visitors' use of these devices to document and share their museum experiences. In this paper, we investigate how one particular photo sharing application, Instagram, is used to communicate visitors' experiences while visiting a museum of natural history. Based on an analysis of 222 instagrams created in the museum, as well as 14 interviews with the visitors who created them, we unpack the compositional resources and concerns contributing to the creation of instagrams in this particular context. By re-categorizing and re-configuring the museum environment, instagrammers work to construct their own narratives from their visits. These findings are then used to discuss what emerging multimedia practices imply for the visitors' engagement with and documentation of museum exhibits. Drawing upon these practices, we discuss the connection between online social media dialogue and the museum site.",
"title": ""
},
{
"docid": "4dc38ae50a2c806321020de4a140ed5f",
"text": "Transcranial direct current stimulation (tDCS) is a promising technology to enhance cognitive and physical performance. One of the major areas of interest is the enhancement of memory function in healthy individuals. The early arrival of tDCS on the market for lifestyle uses and cognitive enhancement purposes lead to the voicing of some important ethical concerns, especially because, to date, there are no official guidelines or evaluation procedures to tackle these issues. The aim of this article is to review ethical issues related to uses of tDCS for memory enhancement found in the ethics and neuroscience literature and to evaluate how realistic and scientifically well-founded these concerns are? In order to evaluate how plausible or speculative each issue is, we applied the methodological framework described by Racine et al. (2014) for \"informed and reflective\" speculation in bioethics. This framework could be succinctly presented as requiring: (1) the explicit acknowledgment of factual assumptions and identification of the value attributed to them; (2) the validation of these assumptions with interdisciplinary literature; and (3) the adoption of a broad perspective to support more comprehensive reflection on normative issues. We identified four major considerations associated with the development of tDCS for memory enhancement: safety, autonomy, justice and authenticity. In order to assess the seriousness and likelihood of harm related to each of these concerns, we analyzed the assumptions underlying the ethical issues, and the level of evidence for each of them. We identified seven distinct assumptions: prevalence, social acceptance, efficacy, ideological stance (bioconservative vs. libertarian), potential for misuse, long term side effects, and the delivery of complete and clear information. We conclude that ethical discussion about memory enhancement via tDCS sometimes involves undue speculation, and closer attention to scientific and social facts would bring a more nuanced analysis. At this time, the most realistic concerns are related to safety and violation of users' autonomy by a breach of informed consent, as potential immediate and long-term health risks to private users remain unknown or not well defined. Clear and complete information about these risks must be provided to research participants and consumers of tDCS products or related services. Broader public education initiatives and warnings would also be worthwhile to reach those who are constructing their own tDCS devices.",
"title": ""
},
{
"docid": "65e320e250cbeb8942bf00f335be4cbd",
"text": "In this paper, we propose a deep progressive reinforcement learning (DPRL) method for action recognition in skeleton-based videos, which aims to distil the most informative frames and discard ambiguous frames in sequences for recognizing actions. Since the choices of selecting representative frames are multitudinous for each video, we model the frame selection as a progressive process through deep reinforcement learning, during which we progressively adjust the chosen frames by taking two important factors into account: (1) the quality of the selected frames and (2) the relationship between the selected frames to the whole video. Moreover, considering the topology of human body inherently lies in a graph-based structure, where the vertices and edges represent the hinged joints and rigid bones respectively, we employ the graph-based convolutional neural network to capture the dependency between the joints for action recognition. Our approach achieves very competitive performance on three widely used benchmarks.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
},
{
"docid": "81190a4c576f86444a95e75654bddf29",
"text": "Enforcing a variety of security measures (such as intrusion detection systems, and so on) can provide a certain level of protection to computer networks. However, such security practices often fall short in face of zero-day attacks. Due to the information asymmetry between attackers and defenders, detecting zero-day attacks remains a challenge. Instead of targeting individual zero-day exploits, revealing them on an attack path is a substantially more feasible strategy. Such attack paths that go through one or more zero-day exploits are called zero-day attack paths. In this paper, we propose a probabilistic approach and implement a prototype system ZePro for zero-day attack path identification. In our approach, a zero-day attack path is essentially a graph. To capture the zero-day attack, a dependency graph named object instance graph is first built as a supergraph by analyzing system calls. To further reveal the zero-day attack paths hidden in the supergraph, our system builds a Bayesian network based upon the instance graph. By taking intrusion evidence as input, the Bayesian network is able to compute the probabilities of object instances being infected. Connecting the high-probability-instances through dependency relations forms a path, which is the zero-day attack path. The experiment results demonstrate the effectiveness of ZePro for zero-day attack path identification.",
"title": ""
},
{
"docid": "4b90fefa981e091ac6a5d2fd83e98b66",
"text": "This paper explores an analysis-aware data cleaning architecture for a large class of SPJ SQL queries. In particular, we propose QuERy, a novel framework for integrating entity resolution (ER) with query processing. The aim of QuERy is to correctly and efficiently answer complex queries issued on top of dirty data. The comprehensive empirical evaluation of the proposed solution demonstrates its significant advantage in terms of efficiency over the traditional techniques for the given problem settings.",
"title": ""
},
{
"docid": "3630c575bf7b5250930c7c54d8a1c6d0",
"text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.",
"title": ""
}
] |
scidocsrr
|
01a6c16b8117bcfc2d9e2eca4cbba9e3
|
Predicting eye fixations using convolutional neural networks
|
[
{
"docid": "0cdf08bd9c2e63f0c9bb1dd7472a23a8",
"text": "Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level model in predicting locations humans fixate on, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model’s predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face detector responses.",
"title": ""
}
] |
[
{
"docid": "c96942b01c05fcfd10a2efcf2b1ca2de",
"text": "This review highlights the current advances in knowledge about the safety, efficacy, quality control, marketing and regulatory aspects of botanical medicines. Phytotherapeutic agents are standardized herbal preparations consisting of complex mixtures of one or more plants which contain as active ingredients plant parts or plant material in the crude or processed state. A marked growth in the worldwide phytotherapeutic market has occurred over the last 15 years. For the European and USA markets alone, this will reach about $7 billion and $5 billion per annum, respectively, in 1999, and has thus attracted the interest of most large pharmaceutical companies. Insufficient data exist for most plants to guarantee their quality, efficacy and safety. The idea that herbal drugs are safe and free from side effects is false. Plants contain hundreds of constituents and some of them are very toxic, such as the most cytotoxic anti-cancer plant-derived drugs, digitalis and the pyrrolizidine alkaloids, etc. However, the adverse effects of phytotherapeutic agents are less frequent compared with synthetic drugs, but well-controlled clinical trials have now confirmed that such effects really exist. Several regulatory models for herbal medicines are currently available including prescription drugs, over-the-counter substances, traditional medicines and dietary supplements. Harmonization and improvement in the processes of regulation is needed, and the general tendency is to perpetuate the German Commission E experience, which combines scientific studies and traditional knowledge (monographs). Finally, the trend in the domestication, production and biotechnological studies and genetic improvement of medicinal plants, instead of the use of plants harvested in the wild, will offer great advantages, since it will be possible to obtain uniform and high quality raw materials which are fundamental to the efficacy and safety of herbal drugs.",
"title": ""
},
{
"docid": "f6449c1e77e5310cd0cae5718ed9591f",
"text": "Individuals with strong self-regulated learning (SRL) skills, characterized by the ability to plan, manage and control their learning process, can learn faster and outperform those with weaker SRL skills. SRL is critical in learning environments that provide low levels of support and guidance, as is commonly the case in Massive Open Online Courses (MOOCs). Learners can be trained to engage in SRL and actively supported with prompts and activities. However, effective implementation of learner support systems in MOOCs requires an understanding of which SRL strategies are most effective and how these strategies manifest in online behavior. Moreover, identifying learner characteristics that are predictive of weaker SRL skills can advance efforts to provide targeted support without obtrusive survey instruments. We investigated SRL in a sample of 4,831 learners across six MOOCs based on individual records of overall course achievement, interactions with course content, and survey responses. We found that goal setting and strategic planning predicted attainment of personal course goals, while help seeking was associated with lower goal attainment. Learners with stronger SRL skills were more likely to revisit previously studied course materials, especially course assessments. Several learner characteristics, including demographics and motivation, predicted learners’ SRL skills. We discuss implications for theory and the development of learning environments that provide",
"title": ""
},
{
"docid": "0d41a6d4cf8c42ccf58bccd232a46543",
"text": "Novelty detection is the ident ification of new or unknown data or signal that a machine learning system is not aware of during training. In this paper we focus on neural network based approaches for novelty detection. Statistical approaches are covered in part-I paper.",
"title": ""
},
{
"docid": "43f3c28db4732ef07d04c3bda628ab66",
"text": "This research proposes a conceptual framework for achieving a secure Internet of Things (IoT) routing that will enforce confidentiality and integrity during the routing process in IoT networks. With billions of IoT devices likely to be interconnected globally, the big issue is how to secure the routing of data in the underlying networks from various forms of attacks. Users will not feel secure if they know their private data could easily be accessed and compromised by unauthorized individuals or machines over the network. It is within this context that we present the design of SecTrust, a lightweight secure trust-based routing framework to identify and isolate common routing attacks in IoT networks. The proposed framework is based on the successful interactions between the IoT sensor nodes, which effectively is a reflection of their trustworthy behavior.",
"title": ""
},
{
"docid": "a4a809852b08a7f0a83fc97fcd9b0b9d",
"text": "This paper proposes the use of hybrid Hidden Markov Model (HMM)/Artificial Neural Network (ANN) models for recognizing unconstrained offline handwritten texts. The structural part of the optical models has been modeled with Markov chains, and a Multilayer Perceptron is used to estimate the emission probabilities. This paper also presents new techniques to remove slope and slant from handwritten text and to normalize the size of text images with supervised learning methods. Slope correction and size normalization are achieved by classifying local extrema of text contours with Multilayer Perceptrons. Slant is also removed in a nonuniform way by using Artificial Neural Networks. Experiments have been conducted on offline handwritten text lines from the IAM database, and the recognition rates achieved, in comparison to the ones reported in the literature, are among the best for the same task.",
"title": ""
},
{
"docid": "f32cfe5e4f781f3ef0da302506f4d65a",
"text": "In this work, we estimate the deterioration of NLP processing given an estimate of the amount and nature of grammatical errors in a text. From a corpus of essays written by English-language learners, we extract ungrammatical sentences, controlling the number and types of errors in each sentence. We focus on six categories of errors that are commonly made by English-language learners, and consider sentences containing one or more of these errors. To evaluate the effect of grammatical errors, we measure the deterioration of ungrammatical dependency parses using the labeled F-score, an adaptation of the labeled attachment score. We find notable differences between the influence of individual error types on the dependency parse, as well as interactions between multiple errors.",
"title": ""
},
{
"docid": "9aab4a607de019226e9465981b82f9b8",
"text": "Color is frequently used to encode values in visualizations. For color encodings to be effective, the mapping between colors and values must preserve important differences in the data. However, most guidelines for effective color choice in visualization are based on either color perceptions measured using large, uniform fields in optimal viewing environments or on qualitative intuitions. These limitations may cause data misinterpretation in visualizations, which frequently use small, elongated marks. Our goal is to develop quantitative metrics to help people use color more effectively in visualizations. We present a series of crowdsourced studies measuring color difference perceptions for three common mark types: points, bars, and lines. Our results indicate that peoples' abilities to perceive color differences varies significantly across mark types. Probabilistic models constructed from the resulting data can provide objective guidance for designers, allowing them to anticipate viewer perceptions in order to inform effective encoding design.",
"title": ""
},
{
"docid": "c997e54b9bc82dcf0ec896710b744cd8",
"text": "The authors report a case of acute compartment syndrome in the thigh in a 19-year-old man with multiple injuries including fracture of the femoral diaphysis. Decompressive fasciotomy was performed emergently. Complete progressive closure of the wound without split-thickness skin grafting was achieved using a modified shoelace technique: sutures were run inside wide drains placed in contact with the muscles and were then tightened over the skin. These drains enlarged the contact area between sutures and muscles, preventing damage to muscles.",
"title": ""
},
{
"docid": "b56a6fe9c9d4b45e9d15054004fac918",
"text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.",
"title": ""
},
{
"docid": "44f3a23a60195314e46c82cb959df224",
"text": "We present a novel online unsupervised method for face identity learning from video streams. The method exploits deep face descriptors together with a memory based learning mechanism that takes advantage of the temporal coherence of visual data. Specifically, we introduce a discriminative descriptor matching solution based on Reverse Nearest Neighbour and a forgetting strategy that detect redundant descriptors and discard them appropriately while time progresses. It is shown that the proposed learning procedure is asymptotically stable and can be effectively used in relevant applications like multiple face identification and tracking from unconstrained video streams. Experimental results show that the proposed method achieves comparable results in the task of multiple face tracking and better performance in face identification with offline approaches exploiting future information. Code will be publicly available.",
"title": ""
},
{
"docid": "7a8fb7b1383b7f7562dd319a6f43fcab",
"text": "An important problem that online work marketplaces face is grouping clients into clusters, so that in each cluster clients are similar with respect to their hiring criteria. Such a separation allows the marketplace to \"learn\" more accurately the hiring criteria in each cluster and recommend the right contractor to each client, for a successful collaboration. We propose a Maximum Likelihood definition of the \"optimal\" client clustering along with an efficient Expectation-Maximization clustering algorithm that can be applied in large marketplaces. Our results on the job hirings at oDesk over a seven-month period show that our client-clustering approach yields significant gains compared to \"learning\" the same hiring criteria for all clients. In addition, we analyze the clustering results to find interesting differences between the hiring criteria in the different groups of clients.",
"title": ""
},
{
"docid": "ca546e8061ca984a7ee57884ed05f340",
"text": "This paper is focused on the adaptive noise cancellation of speech signal using the least mean square (LMS) and normalized least mean square method (NLMS). Adaptive Noise Cancellation is an alternative way of cancelling noise present in a corrupted signal. In this technique, evaluation of distorted signal by additive noise or interference achieved with no a priori estimates of signal or noise. A comparative study is carried out using LMS and NLSM methods. Result shows that these methods has potential in noise cancellation and can be used for variety of applications.",
"title": ""
},
{
"docid": "840a8befafbf6fc43d19b890431f3953",
"text": "The prevalence of high hyperlipemia is increasing around the world. Our aims are to analyze the relationship of triglyceride (TG) and cholesterol (TC) with indexes of liver function and kidney function, and to develop a prediction model of TG, TC in overweight people. A total of 302 adult healthy subjects and 273 overweight subjects were enrolled in this study. The levels of fasting indexes of TG (fs-TG), TC (fs-TC), blood glucose, liver function, and kidney function were measured and analyzed by correlation analysis and multiple linear regression (MRL). The back propagation artificial neural network (BP-ANN) was applied to develop prediction models of fs-TG and fs-TC. The results showed there was significant difference in biochemical indexes between healthy people and overweight people. The correlation analysis showed fs-TG was related to weight, height, blood glucose, and indexes of liver and kidney function; while fs-TC was correlated with age, indexes of liver function (P < 0.01). The MRL analysis indicated regression equations of fs-TG and fs-TC both had statistic significant (P < 0.01) when included independent indexes. The BP-ANN model of fs-TG reached training goal at 59 epoch, while fs-TC model achieved high prediction accuracy after training 1000 epoch. In conclusions, there was high relationship of fs-TG and fs-TC with weight, height, age, blood glucose, indexes of liver function and kidney function. Based on related variables, the indexes of fs-TG and fs-TC can be predicted by BP-ANN models in overweight people.",
"title": ""
},
{
"docid": "5bb36646f4db3d2efad8e0ee828b3022",
"text": "PURPOSE\nWhile modern clinical CT scanners under normal circumstances produce high quality images, severe artifacts degrade the image quality and the diagnostic value if metal prostheses or other metal objects are present in the field of measurement. Standard methods for metal artifact reduction (MAR) replace those parts of the projection data that are affected by metal (the so-called metal trace or metal shadow) by interpolation. However, while sinogram interpolation methods efficiently remove metal artifacts, new artifacts are often introduced, as interpolation cannot completely recover the information from the metal trace. The purpose of this work is to introduce a generalized normalization technique for MAR, allowing for efficient reduction of metal artifacts while adding almost no new ones. The method presented is compared to a standard MAR method, as well as MAR using simple length normalization.\n\n\nMETHODS\nIn the first step, metal is segmented in the image domain by thresholding. A 3D forward projection identifies the metal trace in the original projections. Before interpolation, the projections are normalized based on a 3D forward projection of a prior image. This prior image is obtained, for example, by a multithreshold segmentation of the initial image. The original rawdata are divided by the projection data of the prior image and, after interpolation, denormalized again. Simulations and measurements are performed to compare normalized metal artifact reduction (NMAR) to standard MAR with linear interpolation and MAR based on simple length normalization.\n\n\nRESULTS\nPromising results for clinical spiral cone-beam data are presented in this work. Included are patients with hip prostheses, dental fillings, and spine fixation, which were scanned at pitch values ranging from 0.9 to 3.2. Image quality is improved considerably, particularly for metal implants within bone structures or in their proximity. The improvements are evaluated by comparing profiles through images and sinograms for the different methods and by inspecting ROIs. NMAR outperforms both other methods in all cases. It reduces metal artifacts to a minimum, even close to metal regions. Even for patients with dental fillings, which cause most severe artifacts, satisfactory results are obtained with NMAR. In contrast to other methods, NMAR prevents the usual blurring of structures close to metal implants if the metal artifacts are moderate.\n\n\nCONCLUSIONS\nNMAR clearly outperforms the other methods for both moderate and severe artifacts. The proposed method reliably reduces metal artifacts from simulated as well as from clinical CT data. Computationally efficient and inexpensive compared to iterative methods, NMAR can be used as an additional step in any conventional sinogram inpainting-based MAR method.",
"title": ""
},
{
"docid": "a87ab618f64c9b4f33d5102e1374f1e2",
"text": "Recent genome sequencing studies have shown that the somatic mutations that drive cancer development are distributed across a large number of genes. This mutational heterogeneity complicates efforts to distinguish functional mutations from sporadic, passenger mutations. Since cancer mutations are hypothesized to target a relatively small number of cellular signaling and regulatory pathways, a common practice is to assess whether known pathways are enriched for mutated genes. We introduce an alternative approach that examines mutated genes in the context of a genome-scale gene interaction network. We present a computationally efficient strategy for de novo identification of subnetworks in an interaction network that are mutated in a statistically significant number of patients. This framework includes two major components. First, we use a diffusion process on the interaction network to define a local neighborhood of \"influence\" for each mutated gene in the network. Second, we derive a two-stage multiple hypothesis test to bound the false discovery rate (FDR) associated with the identified subnetworks. We test these algorithms on a large human protein-protein interaction network using somatic mutation data from glioblastoma and lung adenocarcinoma samples. We successfully recover pathways that are known to be important in these cancers and also identify additional pathways that have been implicated in other cancers but not previously reported as mutated in these samples. We anticipate that our approach will find increasing use as cancer genome studies increase in size and scope.",
"title": ""
},
{
"docid": "a75919f4a4abcc0796ae6ba269cb91c1",
"text": "Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system’s constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.",
"title": ""
},
{
"docid": "368f904533e17beec78d347ee8ceabb1",
"text": "A brand community from a customer-experiential perspective is a fabric of relationships in which the customer is situated. Crucial relationships include those between the customer and the brand, between the customer and the firm, between the customer and the product in use, and among fellow customers. The authors delve ethnographically into a brand community and test key findings through quantitative methods. Conceptually, the study reveals insights that differ from prior research in four important ways: First, it expands the definition of a brand community to entities and relationships neglected by previous research. Second, it treats vital characteristics of brand communities, such as geotemporal concentrations and the richness of social context, as dynamic rather than static phenomena. Third, it demonstrates that marketers can strengthen brand communities by facilitating shared customer experiences in ways that alter those dynamic characteristics. Fourth, it yields a new and richer conceptualization of customer loyalty as integration in a brand community.",
"title": ""
},
{
"docid": "2a55dd98b47bd6b79b5e1d441d23c683",
"text": "This case study explores how a constructivist-based instructional design helped adult learners learn in an online learning environment. Two classes of adult learners pursuing professional development and registered in a webbased course were studied. The data consisted of course documents, submitted artefacts, surveys, interviews, in-class observations, and online observations. The study found that the majority of the learners were engaged in two facets of learning. On the one hand, the instructional activities requiring collaboration and interaction helped the learners support one another’s learning, from which most claimed to have benefited. On the other hand, the constructivistbased course assisted many learners to develop a sense of becoming more responsible, self-directed learners. Overall, the social constructivist style of instructional strategy seems promising to facilitate adult learning, which not only helps change learners’ perceptions of the online learning, but also assists them to learn in a more collaborative, authentic and responsible way. The study, however, also disclosed that in order to maintain high-quality learning, appropriate assessment plans and adequate facilitation must be particularly reinforced. A facilitation model is thus suggested. Introduction With the rising prevalence of the Internet, technological media for teaching and learning are becoming increasingly interactive, widely distributed and collaborative (Bonk, Hara, Dennen, Malikowski & Supplee, 2000; Chang, 2003). A collaborative, interactive, constructivist online learning environment, as opposed to a passive learning environment, is found to be better able to help students learn more actively and effectively (Murphy, Mahoney, Chen, Mendoza-Diaz & Yang, 2005). Online learning provides learners, especially adult learners, with an opportunity and flexibility for learning at Note: The research was sponsored by the National Science Council, NSC-95-2520-S-271-001. British Journal of Educational Technology Vol 41 No 5 2010 706–720 doi:10.1111/j.1467-8535.2009.00965.x © 2009 The Author. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. any time and in any place. As lifelong learning is considered both an economic and a social and individual interest (White, 2007), how to assist general adult learners to learn more practically and persistently through the online learning environment is of great interest. The purpose of this study is to explore whether and how nondegreepursuing adult learners benefit from engaging in a constructivist-based online course. This study first briefly reviews the notion of constructivist learning, and then the characteristics of adult learners and adult learning, followed by discussing online instructional strategies designed based on constructivist principles. Two online courses offered for adult learners are investigated to address the research questions. In addition to reporting the findings, a facilitation model for improving the constructivist-based online course geared towards adult learners is also provided at the end. The concept of constructivist learning Constructivist learning arose from Piagetian and Vygotskian perspectives (Palincsar, 1998), emphasising the impact of constructed knowledge on the individual’s active, reflective thinking. 
While Piaget focused more on individual cognitive constructivism, Vygotsky stressed that sociocultural systems have a major impact on an individual's learning (Siegler, 1998). According to social constructivist theory, knowledge is socially situated and is constructed through reflection on one's own thoughts and experiences, as well as other learners' ideas. Dewey (1938) believed that individual development is dependent upon the existing social environmental context and argued that students should learn from the genuine world through continuous interaction with others. Lave and Wenger (1991) asserted that learning is socially situated with members' active participation in their routine, patterned activities. A constructivist, dialogical instructional approach should focus on learning about 'why' and learning about 'how', rather than conducting learning itself (Scott, 2001). In the constructivist learning environment, students are encouraged to actively engage in learning: to discuss, argue, negotiate ideas, and to collaboratively solve problems; teachers design and provide the learning context and facilitate learning activities (Palincsar). Because of their rich life and employment experience, the social, situated nature of learning through practices appears particularly authentic and appropriate for adult learners. Adult learners and adult learning The success of adult learning greatly depends upon individuals' maturation and experiences (Mezirow, 1991, 1997; Wang, Sierra & Folger, 2003) contended that the focus of adult learning is on assisting them to become independent thinkers, rather than passive knowledge receivers. However, like younger students, adult learners also need motivation to sustain their learning, particularly those less engaged working adults (Priest, 2000). To achieve this, the course curriculum must be tailored to individual adult's learning needs, interests, abilities and experiences (Lindeman, 1926). Learners may learn more effectively when instructional activities are designed in accordance with their personal needs, characteristics and, most importantly, their life context (Knowles, 1990). Knowles (1986) proposed the concept of contract learning as the fundamental platform for organising individual adult learning. The idea of contract learning hinges on individual learners planning their own learning based on their
Instructional strategies for facilitating constructivist online learning To implement a constructivist-based online course, various instructional strategies have been implemented, such as requiring students to engage in collaborative, contextualised learning by simulating and assuming an authentic role that is real in the authentic society (Auyeung, 2004; Maor, 2003; Martens, Bastiaens & Kirschner, 2007); setting a collective goal and a shared vision to motivate students' participation and contribution levels (Gilbert & Driscoll, 2002); and requiring students to be in charge of a discussion of their teamwork (Harmon & Jones, 2000). Some online facilitators required students to plan their own learning goals, set their learning pace, and develop the methodology to achieve the set goals (Boyer, 2003; Kochtanek & Hein, 2000). While learners are expected to assume more responsibility for their learning, the role of online facilitators is crucial (Kochtanek & Hein). A number of online educators suggest that the facilitation tasks include providing feedback to learners and a summary of or specific comments on the discussed issues at the end of class discussions (eg, Graham, Cagiltay, Lim, Craner & Duffy, 2001; Maor), and intervening and promoting students' participation in the discussion when it becomes stagnant (eg, Auyeung, 2004; Maor). Encouraging students to provide timely responses and feedback to class members helps boost the students' sense of participation and learning in online learning communities (Gilbert & Driscoll, 2002; Hill, Raven & Han, 2002; Wegerif, 1998), which further helps boost students' achievement (Moller, Harvey, Downs & Godshalk, 2000). Some online facilitators reinforced students' interaction and engagement by laying out clear assessment specifications and setting aside a high percentage of the grade to the class-level online discussion activity (Maor). To facilitate online discussion activities, Murphy et al (2005) proposed a constructivist model, which involves three levels of facilitation: (1) the instructor's mentoring (guiding the learners to develop cognitive and metacognitive skills), (2) teaching assistants' (TA) coaching (monitoring learners in developing task management skills), and (3) learner facilitators' moderation (facilitating required learning activities). Salmon (2002) proposed a five-stage model to facilitate online teaching and learning, in which varied facilitation skills and instructional activities are recommended in different learning stages. The five stages are: (1) access and motivation (setting up the system, welcoming and encouraging), (2) socialisation (establishing cultural, social learning environments), (3) information exchange (facilitating, supporting use of course materials), (4) knowledge construction (conferencing, moderating process), and (5) development (helping achieve personal goals) stages. When designing social constructivist pedagogy for adult learners, Huang (2002) suggested that six instructional principles be considered: interactive learning (interacting with the instructor and peers, rather
than engaging in isolated learning), collaborative learning (engaging in collaborative knowledge construction, social negotiation, and reflection), facilitating learning (providing a safe, positive learning environment for sharing ideas and thoughts), authentic learning (connecting learning content to real-life experiences), student-centred learning (emphasising self-directed, experiential learning), and high-quality learning",
"title": ""
},
{
"docid": "9e865969535469357f2600985750d78e",
"text": "Patients with pathological laughter and crying (PLC) are subject to relatively uncontrollable episodes of laughter, crying or both. The episodes occur either without an apparent triggering stimulus or following a stimulus that would not have led the subject to laugh or cry prior to the onset of the condition. PLC is a disorder of emotional expression rather than a primary disturbance of feelings, and is thus distinct from mood disorders in which laughter and crying are associated with feelings of happiness or sadness. The traditional and currently accepted view is that PLC is due to the damage of pathways that arise in the motor areas of the cerebral cortex and descend to the brainstem to inhibit a putative centre for laughter and crying. In that view, the lesions 'disinhibit' or 'release' the laughter and crying centre. The neuroanatomical findings in a recently studied patient with PLC, along with new knowledge on the neurobiology of emotion and feeling, gave us an opportunity to revisit the traditional view and propose an alternative. Here we suggest that the critical PLC lesions occur in the cerebro-ponto-cerebellar pathways and that, as a consequence, the cerebellar structures that automatically adjust the execution of laughter or crying to the cognitive and situational context of a potential stimulus, operate on the basis of incomplete information about that context, resulting in inadequate and even chaotic behaviour.",
"title": ""
},
{
"docid": "32a0944d7722090860cf3868a50c4ba1",
"text": "This paper addresses a cost-effective, flexible solution of underground mine workers' safety. A module of MEMS based sensors are used for underground environment monitoring and automating progression of measurement data through digital wireless communication technique is proposed with high accuracy, smooth control and reliability. A microcontroller is used for collecting data and making decision, based on which the mine worker is informed through alarm as well as voice system. The voice system with both microphone and speaker, transforms into digital signal and effectively communicate wirelessly with the ground control centre computer. ZigBee, based on IEEE 802.15.4 standard is used for this short distance transmission between the hardware fitted with the mine worker and the ground control centre.",
"title": ""
}
] |
scidocsrr
|
d3e0cc84199f9795bfe1f2001d87685e
|
Aromatase inhibitors versus tamoxifen in early breast cancer: patient-level meta-analysis of the randomised trials
|
[
{
"docid": "f2b291fd6dacf53ed88168d7e1e4ecce",
"text": "BACKGROUND\nAs trials of 5 years of tamoxifen in early breast cancer mature, the relevance of hormone receptor measurements (and other patient characteristics) to long-term outcome can be assessed increasingly reliably. We report updated meta-analyses of the trials of 5 years of adjuvant tamoxifen.\n\n\nMETHODS\nWe undertook a collaborative meta-analysis of individual patient data from 20 trials (n=21,457) in early breast cancer of about 5 years of tamoxifen versus no adjuvant tamoxifen, with about 80% compliance. Recurrence and death rate ratios (RRs) were from log-rank analyses by allocated treatment.\n\n\nFINDINGS\nIn oestrogen receptor (ER)-positive disease (n=10,645), allocation to about 5 years of tamoxifen substantially reduced recurrence rates throughout the first 10 years (RR 0·53 [SE 0·03] during years 0-4 and RR 0·68 [0·06] during years 5-9 [both 2p<0·00001]; but RR 0·97 [0·10] during years 10-14, suggesting no further gain or loss after year 10). Even in marginally ER-positive disease (10-19 fmol/mg cytosol protein) the recurrence reduction was substantial (RR 0·67 [0·08]). In ER-positive disease, the RR was approximately independent of progesterone receptor status (or level), age, nodal status, or use of chemotherapy. Breast cancer mortality was reduced by about a third throughout the first 15 years (RR 0·71 [0·05] during years 0-4, 0·66 [0·05] during years 5-9, and 0·68 [0·08] during years 10-14; p<0·0001 for extra mortality reduction during each separate time period). Overall non-breast-cancer mortality was little affected, despite small absolute increases in thromboembolic and uterine cancer mortality (both only in women older than 55 years), so all-cause mortality was substantially reduced. In ER-negative disease, tamoxifen had little or no effect on breast cancer recurrence or mortality.\n\n\nINTERPRETATION\n5 years of adjuvant tamoxifen safely reduces 15-year risks of breast cancer recurrence and death. ER status was the only recorded factor importantly predictive of the proportional reductions. Hence, the absolute risk reductions produced by tamoxifen depend on the absolute breast cancer risks (after any chemotherapy) without tamoxifen.\n\n\nFUNDING\nCancer Research UK, British Heart Foundation, and Medical Research Council.",
"title": ""
}
] |
[
{
"docid": "e6704cac805b39fe7f321f095a92ebf4",
"text": "Crowd counting is a challenging task, mainly due to the severe occlusions among dense crowds. This paper aims to take a broader view to address crowd counting from the perspective of semantic modeling. In essence, crowd counting is a task of pedestrian semantic analysis involving three key factors: pedestrians, heads, and their context structure. The information of different body parts is an important cue to help us judge whether there exists a person at a certain position. Existing methods usually perform crowd counting from the perspective of directly modeling the visual properties of either the whole body or the heads only, without explicitly capturing the composite body-part semantic structure information that is crucial for crowd counting. In our approach, we first formulate the key factors of crowd counting as semantic scene models. Then, we convert the crowd counting problem into a multi-task learning problem, such that the semantic scene models are turned into different sub-tasks. Finally, the deep convolutional neural networks are used to learn the sub-tasks in a unified scheme. Our approach encodes the semantic nature of crowd counting and provides a novel solution in terms of pedestrian semantic analysis. In experiments, our approach outperforms the state-of-the-art methods on four benchmark crowd counting data sets. The semantic structure information is demonstrated to be an effective cue in scene of crowd counting.",
"title": ""
},
{
"docid": "c61f68104b2d058acb0d16c89e4b1454",
"text": "Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation on input examples, has improved the generalization performance of neural networks. In contrast to the biased individual inputs to enhance the generality, this paper introduces adversarial dropout, which is a minimal set of dropouts that maximize the divergence between 1) the training supervision and 2) the outputs from the network with the dropouts. The identified adversarial dropouts are used to automatically reconfigure the neural network in the training process, and we demonstrated that the simultaneous training on the original and the reconfigured network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST, SVHN, and CIFAR-10. We analyzed the trained model to find the performance improvement reasons. We found that adversarial dropout increases the sparsity of neural networks more than the standard dropout. Finally, we also proved that adversarial dropout is a regularization term with a rank-valued hyper parameter that is different from a continuous-valued parameter to specify the strength of the regularization.",
"title": ""
},
{
"docid": "ab47dbcafba637ae6e3b474642439bd3",
"text": "Ear detection from a profile face image is an important step in many applications including biometric recognition. But accurate and rapid detection of the ear for real-time applications is a challenging task, particularly in the presence of occlusions. In this work, a cascaded AdaBoost based ear detection approach is proposed. In an experiment with a test set of 203 profile face images, all the ears were accurately detected by the proposed detector with a very low (5 x 10-6) false positive rate. It is also very fast and relatively robust to the presence of occlusions and degradation of the ear images (e.g. motion blur). The detection process is fully automatic and does not require any manual intervention.",
"title": ""
},
{
"docid": "fef45863bc531960dbf2a7783995bfdb",
"text": "The main goal of facial attribute recognition is to determine various attributes of human faces, e.g. facial expressions, shapes of mouth and nose, headwears, age and race, by extracting features from the images of human faces. Facial attribute recognition has a wide range of potential application, including security surveillance and social networking. The available approaches, however, fail to consider the correlations and heterogeneities between different attributes. This paper proposes that by utilizing these correlations properly, an improvement can be achieved on the recognition of different attributes. Therefore, we propose a facial attribute recognition approach based on the grouping of different facial attribute tasks and a multi-task CNN structure. Our approach can fully utilize the correlations between attributes, and achieve a satisfactory recognition result on a large number of attributes with limited amount of parameters. Several modifications to the traditional architecture have been tested in the paper, and experiments have been conducted to examine the effectiveness of our approach.",
"title": ""
},
{
"docid": "2f9b8ee2f7578c7820eced92fb98c696",
"text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.",
"title": ""
},
{
"docid": "3c203c55c925fb3f78506d46b8b453a8",
"text": "In this paper, we provide combinatorial interpretations for some determinantal identities involving Fibonacci numbers. We use the method due to Lindström-Gessel-Viennot in which we count nonintersecting n-routes in carefully chosen digraphs in order to gain insight into the nature of some well-known determinantal identities while allowing room to generalize and discover new ones.",
"title": ""
},
{
"docid": "5705022b0a08ca99d4419485f3c03eaa",
"text": "In this paper, we propose a wireless sensor network paradigm for real-time forest fire detection. The wireless sensor network can detect and forecast forest fire more promptly than the traditional satellite-based detection approach. This paper mainly describes the data collecting and processing in wireless sensor networks for real-time forest fire detection. A neural network method is applied to in-network data processing. We evaluate the performance of our approach by simulations.",
"title": ""
},
{
"docid": "673674dd11047747db79e5614daa4974",
"text": "Distracted driving is one of the main causes of vehicle collisions in the United States. Passively monitoring a driver's activities constitutes the basis of an automobile safety system that can potentially reduce the number of accidents by estimating the driver's focus of attention. This paper proposes an inexpensive vision-based system to accurately detect Eyes Off the Road (EOR). The system has three main components: 1) robust facial feature tracking; 2) head pose and gaze estimation; and 3) 3-D geometric reasoning to detect EOR. From the video stream of a camera installed on the steering wheel column, our system tracks facial features from the driver's face. Using the tracked landmarks and a 3-D face model, the system computes head pose and gaze direction. The head pose estimation algorithm is robust to nonrigid face deformations due to changes in expressions. Finally, using a 3-D geometric analysis, the system reliably detects EOR.",
"title": ""
},
{
"docid": "c281538d7aa7bd8727ce4718de82c7c8",
"text": "More than 15 years after model predictive control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for non-linear systems but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty ‘rigorously’ an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, non-linear state estimation, and batch system control. Many practical problems like control objective prioritization and symptom-aided diagnosis can be integrated systematically and effectively into the MPC framework by expanding the problem formulation to include integer variables yielding a mixed-integer quadratic or linear program. Efficient techniques for solving these problems are becoming available. © 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fa2ba8897c9dcd087ea01de2caaed9e4",
"text": "This paper aims to investigate the relationship between library anxiety and emotional intelligence of Bushehr University of Medical Sciences’ students and Persian Gulf University’s students in Bushehr municipality. In this descriptive study which is of correlation type, 700 students of Bushehr University of Medical Sciences and the Persian Gulf University selected through stratified random sampling. Required data has been collected using normalized Siberia Shrink’s emotional intelligence questionnaire and localized Bostick’s library anxiety scale. The results show that the rate of library anxiety among students is less than average (91.73%) except “mechanical factors”. There is not a significant difference in all factors of library anxiety except “interaction with librarian” between male and female. The findings also indicate that there is a negative significant relationship between library anxiety and emotional intelligence (r= -0.41). According to the results, it seems that by improving the emotional intelligence we can decrease the rate of library anxiety among students during their search in a library. Emotional intelligence can optimize academic library’s productivity.",
"title": ""
},
{
"docid": "dcee61dad66f59b2450a3e154726d6b1",
"text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.",
"title": ""
},
{
"docid": "31dbf3fcd1a70ad7fb32fb6e69ef88e3",
"text": "OBJECTIVE\nHealth care researchers have not taken full advantage of the potential to effectively convey meaning in their multivariate data through graphical presentation. The aim of this paper is to translate knowledge from the fields of analytical chemistry, toxicology, and marketing research to the field of medicine by introducing the radar plot, a useful graphical display method for multivariate data.\n\n\nSTUDY DESIGN AND SETTING\nDescriptive study based on literature review.\n\n\nRESULTS\nThe radar plotting technique is described, and examples are used to illustrate not only its programming language, but also the differences in tabular and bar chart approaches compared to radar-graphed data displays.\n\n\nCONCLUSION\nRadar graphing, a form of radial graphing, could have great utility in the presentation of health-related research, especially in situations in which there are large numbers of independent variables, possibly with different measurement scales. This technique has particular relevance for researchers who wish to illustrate the degree of multiple-group similarity/consensus, or group differences on multiple variables in a single graphical display.",
"title": ""
},
{
"docid": "206263c06b0d41725aeec7844f3b3a01",
"text": "Basic properties of the operational transconductance amplifier (OTA) are discussed. Applications of the OTA in voltage-controlled amplifiers, filters, and impedances are presented. A versatile family of voltage-controlled filter sections suitable for systematic design requirements is described. The total number of components used in these circuits is small, and the design equations and voltage-control characteristics are attractive. Limitations as well as practical considerations of OTA-based filters using commercially available bipolar OTAs are discussed. Applications of OTAs in continuous-time monolithic filters are considered.",
"title": ""
},
{
"docid": "9b3a39ddeadd14ea5a50be8ac2057a26",
"text": "0 7 4 0 7 4 5 9 / 0 0 / $ 1 0 . 0 0 © 2 0 0 0 I E E E J u l y / A u g u s t 2 0 0 0 I E E E S O F T W A R E 19 design, algorithm, code, or test—does indeed improve software quality and reduce time to market. Additionally, student and professional programmers consistently find pair programming more enjoyable than working alone. Yet most who have not tried and tested pair programming reject the idea as a redundant, wasteful use of programming resources: “Why would I put two people on a job that just one can do? I can’t afford to do that!” But we have found, as Larry Constantine wrote, that “Two programmers in tandem is not redundancy; it’s a direct route to greater efficiency and better quality.”1 Our supportive evidence comes from professional programmers and from advanced undergraduate students who participated in a structured experiment. The experimental results show that programming pairs develop better code faster with only a minimal increase in prerelease programmer hours. These results apply to all levels of programming skill from novice to expert.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "e30df718ca1981175e888755cce3ce90",
"text": "Human identification at distance by analysis of gait patterns extracted from video has recently become very popular research in biometrics. This paper presents multi-projections based approach to extract gait patterns for human recognition. Binarized silhouette of a motion object is represented by 1-D signals which are the basic image features called the distance vectors. The distance vectors are differences between the bounding box and silhouette, and extracted using four projections to silhouette. Eigenspace transformation is applied to time-varying distance vectors and the statistical distance based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy developed is finally executed to produce final decision. Based on normalized correlation on the distance vectors, gait cycle estimation is also performed to extract the gait cycle. Experimental results on four databases demonstrate that the right person in top two matches 100% of the times for the cases where training and testing sets corresponds to the same walking styles, and in top three-four matches 100% of the times for training and testing sets corresponds to the different walking styles.",
"title": ""
},
{
"docid": "c5eb252d17c2bec8ab168ca79ec11321",
"text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18",
"title": ""
},
{
"docid": "3db3308b3f98563390e8f21e565798b7",
"text": "RDF question/answering (Q/A) allows users to ask questions in natural languages over a knowledge base represented by RDF. To answer a natural language question, the existing work takes a two-stage approach: question understanding and query evaluation. Their focus is on question understanding to deal with the disambiguation of the natural language phrases. The most common technique is the joint disambiguation, which has the exponential search space. In this paper, we propose a systematic framework to answer natural language questions over RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention in the natural language question in a structural way, based on which, RDF Q/A is reduced to subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of query are found. The cost of disambiguation is saved if there are no matching found. More specifically, we propose two different frameworks to build the semantic query graph, one is relation (edge)-first and the other one is node-first. We compare our method with some state-of-the-art RDF Q/A systems in the benchmark dataset. Extensive experiments confirm that our method not only improves the precision but also speeds up query performance greatly.",
"title": ""
},
{
"docid": "23ff0b54dcef99754549275eb6714a9a",
"text": "The HCI community has developed guidelines and recommendations for improving the usability system that are usually applied at the last stages of the software development process. On the other hand, the SE community has developed sound methods to elicit functional requirements in the early stages, but usability has been relegated to the last stages together with other nonfunctional requirements. Therefore, there are no methods of usability requirements elicitation to develop software within both communities. An example of this problem arises if we focus on the Model-Driven Development paradigm, where the methods and tools that are used to develop software do not support usability requirements elicitation. In order to study the existing publications that deal with usability requirements from the first steps of the software development process, this work presents a mapping study. Our aim is to compare usability requirements methods and to identify the strong points of each one.",
"title": ""
},
{
"docid": "a6d550a64dc633e50ee2b21255344e7b",
"text": "Sentiment classification is a much-researched field that identifies positive or negative emotions in a large number of texts. Most existing studies focus on document-based approaches and documents are represented as bag-of word. Therefore, this feature representation fails to obtain the relation or associative information between words and it can't distinguish different opinions of a sentiment word with different targets. In this paper, we present a dependency tree-based sentence-level sentiment classification approach. In contrast to a document, a sentence just contains little information and a small set of features which can be used for the sentiment classification. So we not only capture flat features (bag-of-word), but also extract structured features from the dependency tree of a sentence. We propose a method to add more information to the dependency tree and provide an algorithm to prune dependency tree to reduce the noisy, and then introduce a convolution tree kernel-based approach to the sentence-level sentiment classification. The experimental results show that our dependency tree-based approach achieved significant improvement, particularly for implicit sentiment classification.",
"title": ""
}
] |
scidocsrr
|
797ec259cf5128e687eb9748f3e338f9
|
Chronic insomnia and its negative consequences for health and functioning of adolescents: a 12-month prospective study.
|
[
{
"docid": "b6bf6c87040bc4996315fee62acb911b",
"text": "The influence of the sleep patterns of 2,259 students, aged 11 to 14 years, on trajectories of depressive symptoms, self-esteem, and grades was longitudinally examined using latent growth cross-domain models. Consistent with previous research, sleep decreased over time. Students who obtained less sleep in sixth grade exhibited lower initial self-esteem and grades and higher initial levels of depressive symptoms. Similarly, students who obtained less sleep over time reported heightened levels of depressive symptoms and decreased self-esteem. Sex of the student played a strong role as a predictor of hours of sleep, self-esteem, and grades. This study underscores the role of sleep in predicting adolescents' psychosocial outcomes and highlights the importance of using idiographic methodologies in the study of developmental processes.",
"title": ""
}
] |
[
{
"docid": "e510140bfc93089e69cb762b968de5e9",
"text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.",
"title": ""
},
{
"docid": "e9838d3c33d19bdd20a001864a878757",
"text": "FPGAs are increasingly popular as application-specific accelerators because they lead to a good balance between flexibility and energy efficiency, compared to CPUs and ASICs. However, the long routing time imposes a barrier on FPGA computing, which significantly hinders the design productivity. Existing attempts of parallelizing the FPGA routing either do not fully exploit the parallelism or suffer from an excessive quality loss. Massive parallelism using GPUs has the potential to solve this issue but faces non-trivial challenges.\n To cope with these challenges, this work presents Corolla, a GPU-accelerated FPGA routing method. Corolla enables applying the GPU-friendly shortest path algorithm in FPGA routing, leveraging the idea of problem size reduction by limiting the search in routing subgraphs. We maintain the convergence after problem size reduction using the dynamic expansion of the routing resource subgraphs. In addition, Corolla explores the fine-grained single-net parallelism and proposes a hybrid approach to combine the static and dynamic parallelism on GPU. To explore the coarse-grained multi-net parallelism, Corolla proposes an effective method to parallelize mutli-net routing while preserving the equivalent routing results as the original single-net routing. Experimental results show that Corolla achieves an average of 18.72x speedup on GPU with a tolerable loss in the routing quality and sustains a scalable speedup on large-scale routing graphs. To our knowledge, this is the first work to demonstrate the effectiveness of GPU-accelerated FPGA routing.",
"title": ""
},
{
"docid": "77d11e0b66f3543fadf91d0de4c928c9",
"text": "In the United States, the number of people over 65 will double between ow and 2030 to 69.4 million. Providing care for this increasing population becomes increasingly difficult as the cognitive and physical health of elders deteriorates. This survey article describes ome of the factors that contribute to the institutionalization of elders, and then presents some of the work done towards providing technological support for this vulnerable community.",
"title": ""
},
{
"docid": "02855c493744435d868d669a6ddedd1c",
"text": "Recurrent neural networks (RNNs), particularly long short-term memory (LSTM), have gained much attention in automatic speech recognition (ASR). Although some successful stories have been reported, training RNNs remains highly challenging, especially with limited training data. Recent research found that a well-trained model can be used as a teacher to train other child models, by using the predictions generated by the teacher model as supervision. This knowledge transfer learning has been employed to train simple neural nets with a complex one, so that the final performance can reach a level that is infeasible to obtain by regular training. In this paper, we employ the knowledge transfer learning approach to train RNNs (precisely LSTM) using a deep neural network (DNN) model as the teacher. This is different from most of the existing research on knowledge transfer learning, since the teacher (DNN) is assumed to be weaker than the child (RNN); however, our experiments on an ASR task showed that it works fairly well: without applying any tricks on the learning scheme, this approach can train RNNs successfully even with limited training data.",
"title": ""
},
{
"docid": "c6160b8ad36bc4f297bfb1f6b04c79e0",
"text": "Despite their incentive structure flaws, mining pools account for more than 95% of Bitcoin’s computation power. This paper introduces an attack against mining pools in which a malicious party pays pool members to withhold their solutions from their pool operator. We show that an adversary with a tiny amount of computing power and capital can execute this attack. Smart contracts enforce the malicious party’s payments, and therefore miners need neither trust the attacker’s intentions nor his ability to pay. Assuming pool members are rational, an adversary with a single mining ASIC can, in theory, destroy all big mining pools without losing any money (and even make some profit).",
"title": ""
},
{
"docid": "62c6050db8e42b1de54f8d1d54fd861f",
"text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.",
"title": ""
},
{
"docid": "3e9f54363d930c703dfe20941b2568b0",
"text": "Organizations are looking to new graduate nurses to fill expected staffing shortages over the next decade. Creative and effective onboarding programs will determine the success or failure of these graduates as they transition from student to professional nurse. This longitudinal quantitative study with repeated measures used the Casey-Fink Graduate Nurse Experience Survey to investigate the effects of offering a prelicensure extern program and postlicensure residency program on new graduate nurses and organizational outcomes versus a residency program alone. Compared with the nurse residency program alone, the combination of extern program and nurse residency program improved neither the transition factors most important to new nurse graduates during their first year of practice nor a measure important to organizations, retention rates. The additional cost of providing an extern program should be closely evaluated when making financially responsible decisions.",
"title": ""
},
{
"docid": "68971b7efc9663c37113749206b5382b",
"text": "Trehalose 6-phosphate (Tre6P), the intermediate of trehalose biosynthesis, has a profound influence on plant metabolism, growth, and development. It has been proposed that Tre6P acts as a signal of sugar availability and is possibly specific for sucrose status. Short-term sugar-feeding experiments were carried out with carbon-starved Arabidopsis thaliana seedlings grown in axenic shaking liquid cultures. Tre6P increased when seedlings were exogenously supplied with sucrose, or with hexoses that can be metabolized to sucrose, such as glucose and fructose. Conditional correlation analysis and inhibitor experiments indicated that the hexose-induced increase in Tre6P was an indirect response dependent on conversion of the hexose sugars to sucrose. Tre6P content was affected by changes in nitrogen status, but this response was also attributable to parallel changes in sucrose. The sucrose-induced rise in Tre6P was unaffected by cordycepin but almost completely blocked by cycloheximide, indicating that de novo protein synthesis is necessary for the response. There was a strong correlation between Tre6P and sucrose even in lines that constitutively express heterologous trehalose-phosphate synthase or trehalose-phosphate phosphatase, although the Tre6P:sucrose ratio was shifted higher or lower, respectively. It is proposed that the Tre6P:sucrose ratio is a critical parameter for the plant and forms part of a homeostatic mechanism to maintain sucrose levels within a range that is appropriate for the cell type and developmental stage of the plant.",
"title": ""
},
{
"docid": "555e3bbc504c7309981559a66c584097",
"text": "The hippocampus has been implicated in the regulation of anxiety and memory processes. Nevertheless, the precise contribution of its ventral (VH) and dorsal (DH) division in these issues still remains a matter of debate. The Trial 1/2 protocol in the elevated plus-maze (EPM) is a suitable approach to assess features associated with anxiety and memory. Information about the spatial environment on initial (Trial 1) exploration leads to a subsequent increase in open-arm avoidance during retesting (Trial 2). The objective of the present study was to investigate whether transient VH or DH deactivation by lidocaine microinfusion would differently interfere with the performance of EPM-naive and EPM-experienced rats. Male Wistar rats were bilaterally-implanted with guide cannulas aimed at the VH or the DH. One-week after surgery, they received vehicle or lidocaine 2.0% in 1.0 microL (0.5 microL per side) at pre-Trial 1, post-Trial 1 or pre-Trial 2. There was an increase in open-arm exploration after the intra-VH lidocaine injection on Trial 1. Intra-DH pre-Trial 2 administration of lidocaine also reduced the open-arm avoidance. No significant changes were observed in enclosed-arm entries, an EPM index of general exploratory activity. The cautious exploration of potentially dangerous environment requires VH functional integrity, suggesting a specific role for this region in modulating anxiety-related behaviors. With regard to the DH, it may be preferentially involved in learning and memory since the acquired response of inhibitory avoidance was no longer observed when lidocaine was injected pre-Trial 2.",
"title": ""
},
{
"docid": "4ec266df91a40330b704c4e10eacb820",
"text": "Recently many cases of missing children between ages 14 and 17 years are reported. Parents always worry about the possibility of kidnapping of their children. This paper proposes an Android based solution to aid parents to track their children in real time. Nowadays, most mobile phones are equipped with location services capabilities allowing us to get the device’s geographic position in real time. The proposed solution takes the advantage of the location services provided by mobile phone since most of kids carry mobile phones. The mobile application use the GPS and SMS services found in Android mobile phones. It allows the parent to get their child’s location on a real time map. The system consists of two sides, child side and parent side. A parent’s device main duty is to send a request location SMS to the child’s device to get the location of the child. On the other hand, the child’s device main responsibility is to reply the GPS position to the parent’s device upon request. Keywords—Child Tracking System, Global Positioning System (GPS), SMS-based Mobile Application.",
"title": ""
},
{
"docid": "065b0af0f1ed195ac90fa3ad041fa4c4",
"text": "We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We study the relative performance of this tangible interaction with direct multi-touch interaction and our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and careful design is necessary to validate their properties.",
"title": ""
},
{
"docid": "16d417e6d2c75edbdf2adbed8ec8d072",
"text": "Network middleboxes are difficult to manage and troubleshoot, due to their proprietary monolithic design. Moving towards Network Functions Virtualization (NFV), virtualized middlebox appliances can be more flexibly instantiated and dynamically chained, making troubleshooting even more difficult. To guarantee carrier-grade availability and minimize outages, operators need ways to automatically verify that the deployed network and middlebox configurations obey higher level network policies. In this paper, we first define and identify the key challenges for checking the correct forwarding behavior of Service Function Chains (SFC). We then design and develop a network diagnosis framework that aids network administrators in verifying the correctness of SFC policy enforcement. Our prototype - SFC-Checker can verify stateful service chains efficiently, by analyzing the switches' forwarding rules and the middleboxes' stateful forwarding behavior. Built on top of the network function models we proposed, we develop a diagnosis algorithm that is able to check the stateful forwarding behavior of a chain of network service functions.",
"title": ""
},
{
"docid": "1a2fe54f7456c5e726f87a401a4628f3",
"text": "Starting from a neurobiological standpoint, I will propose that our capacity to understand others as intentional agents, far from being exclusively dependent upon mentalistic/linguistic abilities, be deeply grounded in the relational nature of our interactions with the world. According to this hypothesis, an implicit, prereflexive form of understanding of other individuals is based on the strong sense of identity binding us to them. We share with our conspecifics a multiplicity of states that include actions, sensations and emotions. A new conceptual tool able to capture the richness of the experiences we share with others will be introduced: the shared manifold of intersubjectivity. I will posit that it is through this shared manifold that it is possible for us to recognize other human beings as similar to us. It is just because of this shared manifold that intersubjective communication and ascription of intentionality become possible. It will be argued that the same neural structures that are involved in processing and controlling executed actions, felt sensations and emotions are also active when the same actions, sensations and emotions are to be detected in others. It therefore appears that a whole range of different \"mirror matching mechanisms\" may be present in our brain. This matching mechanism, constituted by mirror neurons originally discovered and described in the domain of action, could well be a basic organizational feature of our brain, enabling our rich and diversified intersubjective experiences. This perspective is in a position to offer a global approach to the understanding of the vulnerability to major psychoses such as schizophrenia.",
"title": ""
},
{
"docid": "7c9d35fb9cec2affbe451aed78541cef",
"text": "Dental caries, also known as dental cavities, is the most widespread pathology in the world. Up to a very recent period, almost all individuals had the experience of this pathology at least once in their life. Early detection of dental caries can help in a sharp decrease in the dental disease rate. Thanks to the growing accessibility to medical imaging, the clinical applications now have better impact on patient care. Recently, there has been interest in the application of machine learning strategies for classification and analysis of image data. In this paper, we propose a new method to detect and identify dental caries using X-ray images as dataset and deep neural network as technique. This technique is based on stacked sparse auto-encoder and a softmax classifier. Those techniques, sparse auto-encoder and softmax, are used to train a deep neural network. The novelty here is to apply deep neural network to diagnosis of dental caries. This approach was tested on a real dataset and has demonstrated a good performance of detection. Keywords-dental X-ray; classification; Deep Neural Networks; Stacked sparse auto-encoder; Softmax.",
"title": ""
},
{
"docid": "e98aefff2ab776efcc13c1d9534ec9fb",
"text": "Many software providers operate crash reporting services to automatically collect crashes from millions of customers and file bug reports. Precisely triaging crashes is necessary and important for software providers because the millions of crashes that may be reported every day are critical in identifying high impact bugs. However, the triaging accuracy of existing systems is limited, as they rely only on the syntactic information of the stack trace at the moment of a crash without analyzing program semantics.\n In this paper, we present RETracer, the first system to triage software crashes based on program semantics reconstructed from memory dumps. RETracer was designed to meet the requirements of large-scale crash reporting services. RETracer performs binary-level backward taint analysis without a recorded execution trace to understand how functions on the stack contribute to the crash. The main challenge is that the machine state at an earlier time cannot be recovered completely from a memory dump, since most instructions are information destroying.\n We have implemented RETracer for x86 and x86-64 native code, and compared it with the existing crash triaging tool used by Microsoft. We found that RETracer eliminates two thirds of triage errors based on a manual analysis of 140 bugs fixed in Microsoft Windows and Office. RETracer has been deployed as the main crash triaging system on Microsoft's crash reporting service.",
"title": ""
},
{
"docid": "ed3a859e2cea465a6d34c556fec860d9",
"text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.",
"title": ""
},
{
"docid": "c80dbfc2e1f676a7ffe4a6a4f7460d36",
"text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.",
"title": ""
},
{
"docid": "e1e1005788a0133025f9f3951b9a5372",
"text": "Despite the recent success of neural networks in tasks involving natural language understanding (NLU) there has only been limited progress in some of the fundamental challenges of NLU, such as the disambiguation of the meaning and function of words in context. This work approaches this problem by incorporating contextual information into word representations prior to processing the task at hand. To this end we propose a general-purpose reading architecture that is employed prior to a task-specific NLU model. It is responsible for refining context-agnostic word representations with contextual information and lends itself to the introduction of additional, context-relevant information from external knowledge sources. We demonstrate that previously non-competitive models benefit dramatically from employing contextual representations, closing the gap between general-purpose reading architectures and the state-of-the-art performance obtained with fine-tuned, task-specific architectures. Apart from our empirical results we present a comprehensive analysis of the computed representations which gives insights into the kind of information added during the refinement process.",
"title": ""
},
{
"docid": "a3cb839b4299a50c475b2bb1b608ee91",
"text": "In this work, we present an event detection method in Twitter based on clustering of hashtags and introduce an enhancement technique by using the semantic similarities between the hashtags. To this aim, we devised two methods for tweet vector generation and evaluated their effect on clustering and event detection performance in comparison to word-based vector generation methods. By analyzing the contexts of hashtags and their co-occurrence statistics with other words, we identify their paradigmatic relationships and similarities. We make use of this information while applying a lexico-semantic expansion on tweet contents before clustering the tweets based on their similarities. Our aim is to tolerate spelling errors and capture statements which actually refer to the same concepts. We evaluate our enhancement solution on a three-day dataset of tweets with Turkish content. In our evaluations, we observe clearer clusters, improvements in accuracy, and earlier event detection times.",
"title": ""
},
{
"docid": "de2527840267fbc3bf5412498323933b",
"text": "In time series classification, signals are typically mapped into some intermediate representation which is used to construct models. We introduce the joint time-frequency scattering transform, a locally time-shift invariant representation which characterizes the multiscale energy distribution of a signal in time and frequency. It is computed through wavelet convolutions and modulus non-linearities and may therefore be implemented as a deep convolutional neural network whose filters are not learned but calculated from wavelets. We consider the progression from mel-spectrograms to time scattering and joint time-frequency scattering transforms, illustrating the relationship between increased discriminability and refinements of convolutional network architectures. The suitability of the joint time-frequency scattering transform for characterizing time series is demonstrated through applications to chirp signals and audio synthesis experiments. The proposed transform also obtains state-of-the-art results on several audio classification tasks, outperforming time scattering transforms and achieving accuracies comparable to those of fully learned networks.",
"title": ""
}
] |
scidocsrr
|
693a7765d3d98364d7d8eb154de2f31d
|
Towards a Unified Natural Language Inference Framework to Evaluate Sentence Representations
|
[
{
"docid": "e925f5fa3f6a2bdcce3712e2f8e79fe3",
"text": "Events are communicated in natural language with varying degrees of certainty. For example, if you are “hoping for a raise,” it may be somewhat less likely than if you are “expecting” one. To study these distinctions, we present scalable, highquality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, with very little training, can reliably provide judgments about what events are mentioned and the extent to which the author thinks they actually happened. We also show how such data enables the development of regression models for fine-grained scalar factuality predictions that outperform strong baselines.",
"title": ""
}
] |
[
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "a922051835f239db76be1dbb8edead3e",
"text": "Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.",
"title": ""
},
{
"docid": "93a49a164437d3cc266d8e859f2bb265",
"text": "...................................................................................................................................................4",
"title": ""
},
{
"docid": "579536fe3f52f4ed244f06210a9c2cd1",
"text": "OBJECTIVE\nThis review integrates recent advances in attachment theory, affective neuroscience, developmental stress research, and infant psychiatry in order to delineate the developmental precursors of posttraumatic stress disorder.\n\n\nMETHOD\nExisting attachment, stress physiology, trauma, and neuroscience literatures were collected using Index Medicus/Medline and Psychological Abstracts. This converging interdisciplinary data was used as a theoretical base for modelling the effects of early relational trauma on the developing central and autonomic nervous system activities that drive attachment functions.\n\n\nRESULTS\nCurrent trends that integrate neuropsychiatry, infant psychiatry, and clinical psychiatry are generating more powerful models of the early genesis of a predisposition to psychiatric disorders, including PTSD. Data are presented which suggest that traumatic attachments, expressed in episodes of hyperarousal and dissociation, are imprinted into the developing limbic and autonomic nervous systems of the early maturing right brain. These enduring structural changes lead to the inefficient stress coping mechanisms that lie at the core of infant, child, and adult posttraumatic stress disorders.\n\n\nCONCLUSIONS\nDisorganised-disoriented insecure attachment, a pattern common in infants abused in the first 2 years of life, is psychologically manifest as an inability to generate a coherent strategy for coping with relational stress. Early abuse negatively impacts the developmental trajectory of the right brain, dominant for attachment, affect regulation, and stress modulation, thereby setting a template for the coping deficits of both mind and body that characterise PTSD symptomatology. These data suggest that early intervention programs can significantly alter the intergenerational transmission of posttraumatic stress disorders.",
"title": ""
},
{
"docid": "553ec50cb948fb96d96b5481ada71399",
"text": "Enormous amount of online information, available in legal domain, has made legal text processing an important area of research. In this paper, we attempt to survey different text summarization techniques that have taken place in the recent past. We put special emphasis on the issue of legal text summarization, as it is one of the most important areas in legal domain. We start with general introduction to text summarization, briefly touch the recent advances in single and multi-document summarization, and then delve into extraction based legal text summarization. We discuss different datasets and metrics used in summarization and compare performances of different approaches, first in general and then focused to legal text. we also mention highlights of different summarization techniques. We briefly cover a few software tools used in legal text summarization. We finally conclude with some future research directions.",
"title": ""
},
{
"docid": "ec1228f8ddf271e8ec5e7018e45b0e77",
"text": "The present work is focused on the systematization of a process of knowledge acquisition for its use in intelligent management systems. The result was the construction of a computational structure for use inside the institutions (Intranet) as well as out of them (Internet). This structure was called Knowledge Engineering Suite an ontological engineering tool to support the construction of antologies in a collaborative setting and was based on observations made at Semantic Web, UNL (Universal Networking Language) and WorldNet. We use a knowledge representation technique called DCKR to organize knowledge and psychoanalytic studies, focused mainly on Lacan and his language theory to develop a methodology called Engineering of Minds to improve the synchronicity between knowledge engineers and specialist in a particular knowledge domain.",
"title": ""
},
{
"docid": "82af5212b43e8dfe6d54582de621d96c",
"text": "The use of multiple radar configurations can overcome some of the geometrical limitations that exist when obtaining radar images of a target using inverse synthetic aperture radar (ISAR) techniques. It is shown here how a particular bistatic configuration can produce three view angles and three ISAR images simultaneously. A new ISAR signal model is proposed and the applicability of employing existing monostatic ISAR techniques to bistatic configurations is analytically demonstrated. An analysis of the distortion introduced by the bistatic geometry to the ISAR image point spread function (PSF) is then carried out and the limits of the applicability of ISAR techniques (without the introduction of additional signal processing) are found and discussed. Simulations and proof of concept experimental data are also provided that support the theory.",
"title": ""
},
{
"docid": "70789bc929ef7d36f9bb4a02793f38f5",
"text": "Lock managers are among the most studied components in concurrency control and transactional systems. However, one question seems to have been generally overlooked: “When there are multiple lock requests on the same object, which one(s) should be granted first?” Nearly all existing systems rely on a FIFO (first in, first out) strategy to decide which transaction(s) to grant the lock to. In this paper, however, we show that the lock scheduling choices have significant ramifications on the overall performance of a transactional system. Despite the large body of research on job scheduling outside the database context, lock scheduling presents subtle but challenging requirements that render existing results on scheduling inapt for a transactional database. By carefully studying this problem, we present the concept of contention-aware scheduling, show the hardness of the problem, and propose novel lock scheduling algorithms (LDSF and bLDSF), which guarantee a constant factor approximation of the best scheduling. We conduct extensive experiments using a popular database on both TPC-C and a microbenchmark. Compared to FIFO— the default scheduler in most database systems—our bLDSF algorithm yields up to 300x speedup in overall transaction latency. Alternatively, our LDSF algorithm, which is simpler and achieves comparable performance to bLDSF, has already been adopted by open-source community, and was chosen as the default scheduling strategy in MySQL 8.0.3+. PVLDB Reference Format: Boyu Tian, Jiamin Huang, Barzan Mozafari, Grant Schoenebeck. Contention-Aware Lock Scheduling for Transactional Databases. PVLDB, 11 (5): xxxx-yyyy, 2018. DOI: 10.1145/3177732.3177740",
"title": ""
},
{
"docid": "1e69c1aef1b194a27d150e45607abd5a",
"text": "Methods of semantic relatedness are essential for wide range of tasks such as information retrieval and text mining. This paper, concerned with these automated methods, attempts to improve Gloss Vector semantic relatedness measure for more reliable estimation of relatedness between two input concepts. Generally, this measure by considering frequency cut-off for big rams tries to remove low and high frequency words which usually do not end up being significant features. However, this naive cutting approach can lead to loss of valuable information. By employing point wise mutual information (PMI) as a measure of association between features, we will try to enforce the foregoing elimination step in a statistical fashion. Applying both approaches to the biomedical domain, using MEDLINE as corpus, MeSH as thesaurus, and available reference standard of 311 concept pairs manually rated for semantic relatedness, we will show that PMI for removing insignificant features is more effective approach than frequency cut-off.",
"title": ""
},
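To make the PMI-based filtering idea in the preceding record concrete, here is a minimal sketch that computes PMI from raw co-occurrence counts and keeps only features whose association with a target concept exceeds a threshold. The word pairs, the threshold of 0.5, and the log base 2 are illustrative assumptions; the sketch does not reproduce the Gloss Vector pipeline or the MEDLINE/MeSH setup.

```python
import math
from collections import Counter

# Hypothetical co-occurrence data: (target, feature) pairs observed in a corpus window.
pairs = [("cold", "fever"), ("cold", "fever"), ("cold", "winter"),
         ("cold", "the"), ("flu", "fever"), ("flu", "the"), ("flu", "virus")]

pair_counts = Counter(pairs)
target_counts = Counter(t for t, _ in pairs)
feature_counts = Counter(f for _, f in pairs)
total = len(pairs)

def pmi(target, feature):
    """Pointwise mutual information between a target word and a feature word."""
    p_xy = pair_counts[(target, feature)] / total
    p_x = target_counts[target] / total
    p_y = feature_counts[feature] / total
    return math.log2(p_xy / (p_x * p_y)) if p_xy > 0 else float("-inf")

# Keep only features whose association with the target exceeds a PMI threshold,
# instead of applying a raw frequency cut-off.
threshold = 0.5
kept = {f for f in feature_counts if pmi("cold", f) > threshold}
print(sorted(kept))
```

Note how the very frequent but uninformative feature "the" is dropped by PMI even though a pure frequency cut-off might have kept it.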
{
"docid": "68c1aa2e3d476f1f24064ed6f0f07fb7",
"text": "Granuloma annulare is a benign, asymptomatic, self-limited papular eruption found in patients of all ages. The primary skin lesion usually is grouped papules in an enlarging annular shape, with color ranging from flesh-colored to erythematous. The two most common types of granuloma annulare are localized, which typically is found on the lateral or dorsal surfaces of the hands and feet; and disseminated, which is widespread. Localized disease generally is self-limited and resolves within one to two years, whereas disseminated disease lasts longer. Because localized granuloma annulare is self-limited, no treatment other than reassurance may be necessary. There are no well-designed randomized controlled trials of the treatment of granuloma annulare. Treatment recommendations are based on the pathophysiology of the disease, expert opinion, and case reports only. Liquid nitrogen, injected steroids, or topical steroids under occlusion have been recommended for treatment of localized disease. Disseminated granuloma annulare may be treated with one of several systemic therapies such as dapsone, retinoids, niacinamide, antimalarials, psoralen plus ultraviolet A therapy, fumaric acid esters, tacrolimus, and pimecrolimus. Consultation with a dermatologist is recommended because of the possible toxicities of these agents.",
"title": ""
},
{
"docid": "c04171e96f62493fd75cdf379de1c2ab",
"text": "Alarming epidemiological features of Alzheimer's disease impose curative treatment rather than symptomatic relief. Drug repurposing, that is reappraisal of a substance's indications against other diseases, offers time, cost and efficiency benefits in drug development, especially when in silico techniques are used. In this study, we have used gene signatures, where up- and down-regulated gene lists summarize a cell's gene expression perturbation from a drug or disease. To cope with the inherent biological and computational noise, we used an integrative approach on five disease-related microarray data sets of hippocampal origin with three different methods of evaluating differential gene expression and four drug repurposing tools. We found a list of 27 potential anti-Alzheimer agents that were additionally processed with regard to molecular similarity, pathway/ontology enrichment and network analysis. Protein kinase C, histone deacetylase, glycogen synthase kinase 3 and arginase inhibitors appear consistently in the resultant drug list and may exert their pharmacologic action in an epidermal growth factor receptor-mediated subpathway of Alzheimer's disease.",
"title": ""
},
{
"docid": "f0a5d33084588ed4b7fc4905995f91e2",
"text": "A new microstrip dual-band polarization reconfigurable antenna is presented for wireless local area network (WLAN) systems operating at 2.4 and 5.8 GHz. The antenna consists of a square microstrip patch that is aperture coupled to a microstrip line located along the diagonal line of the patch. The dual-band operation is realized by employing the TM10 and TM30 modes of the patch antenna. Four shorting posts are inserted into the patch to adjust the frequency ratio of the two modes. The center of each edge of the patch is connected to ground via a PIN diode for polarization switching. By switching between the different states of PIN diodes, the proposed antenna can radiate either horizontal, vertical, or 45° linear polarization in the two frequency bands. Measured results on reflection coefficients and radiation patterns agree well with numerical simulations.",
"title": ""
},
{
"docid": "a4e92e4dc5d93aec4414bc650436c522",
"text": "Where you can find the compiling with continuations easily? Is it in the book store? On-line book store? are you sure? Keep in mind that you will find the book in this site. This book is very referred for you because it gives not only the experience but also lesson. The lessons are very valuable to serve for you, that's not about who are reading this compiling with continuations book. It is about this book that will give wellness for all people from many societies.",
"title": ""
},
{
"docid": "948e65673f679fe37027f4dc496397f8",
"text": "Online courses are growing at a tremendous rate, and although we have discovered a great deal about teaching and learning in the online environment, there is much left to learn. One variable that needs to be explored further is procrastination in online coursework. In this mixed methods study, quantitative methods were utilized to evaluate the influence of online graduate students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Additionally, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Collectively, results indicated that ability, effort, context, and luck influenced procrastination in this sample of graduate students. A discussion of these findings, implications for instructors, and recommendations for future research ensues. Online course offerings and degree programs have recently increased at a rapid rate and have gained in popularity among students (Allen & Seaman, 2010, 2011). Garrett (2007) reported that half of prospective students surveyed about postsecondary programs expressed a preference for online and hybrid programs, typically because of the flexibility and convenience (Daymont, Blau, & Campbell, 2011). Advances in learning management systems such as Blackboard have facilitated the dramatic increase in asynchronous programs. Although the research literature concerning online learning has blossomed over the past decade, much is left to learn about important variables that impact student learning and achievement. The purpose of this mixed methods study was to better understand the relationship between online graduate students’ attributional beliefs and their tendency to procrastinate. The approach to this objective was twofold. First, quantitative methods were utilized to evaluate the influence of students’ attributions for academic outcomes to ability, effort, context, and luck on their tendency to procrastinate. Second, qualitative methods were utilized to explore students’ attributional beliefs about their tendency to procrastinate in their online coursework. Journal of Interactive Online Learning Rakes, Dunn, and Rakes",
"title": ""
},
{
"docid": "9d1455f0c26812ae7bbf6d0cebd190c2",
"text": "This paper describes the design, construction, and performance analysis of an adjustable Scotch yoke mechanism mimicking the dorsoventral movement for dolphin-like robots. Since dolphins propel themselves by vertical oscillations following a sinusoidal path with alterable amplitudes, a two- motor-driven Scotch yoke mechanism is adopted as the main propulsor to generate sinusoidal oscillations, where leading screw mechanism and rack and pinion mechanism actuated by the minor motor are incorporated to independently change the length of the crank actuated by the major motor. Meanwhile, the output of the Scotch yoke, i.e., reciprocating motion, is converted into the up-and-down oscillation via rack and gear transmission. A motion control scheme based on the novel Scotch yoke is then formed and applied to achieve desired propulsion. Preliminary tests in a robotics context finally confirm the feasibility of the developed mechanism in mechanics and propulsion.",
"title": ""
},
{
"docid": "57233e0b2c7ef60cc505cd23492a2e03",
"text": "In nature, the eastern North American monarch population is known for its southward migration during the late summer/autumn from the northern USA and southern Canada to Mexico, covering thousands of miles. By simplifying and idealizing the migration of monarch butterflies, a new kind of nature-inspired metaheuristic algorithm, called monarch butterfly optimization (MBO), a first of its kind, is proposed in this paper. In MBO, all the monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1) and Mexico (Land 2). Accordingly, the positions of the monarch butterflies are updated in two ways. Firstly, the offsprings are generated (position updating) by migration operator, which can be adjusted by the migration ratio. It is followed by tuning the positions for other butterflies by means of butterfly adjusting operator. In order to keep the population unchanged and minimize fitness evaluations, the sum of the newly generated butterflies in these two ways remains equal to the original population. In order to demonstrate the superior performance of the MBO algorithm, a comparative study with five other metaheuristic algorithms through thirty-eight benchmark problems is carried out. The results clearly exhibit the capability of the MBO method toward finding the enhanced function values on most of the benchmark problems with respect to the other five algorithms. Note that the source codes of the proposed MBO algorithm are publicly available at GitHub ( https://github.com/ggw0122/Monarch-Butterfly-Optimization , C++/MATLAB) and MATLAB Central ( http://www.mathworks.com/matlabcentral/fileexchange/50828-monarch-butterfly-optimization , MATLAB).",
"title": ""
},
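The migration step described in the abstract above can be sketched in a few lines. This is a simplified reading of the operator: each coordinate of an offspring in Land 1 is copied from a randomly chosen butterfly in either Land 1 or Land 2, with the choice governed by the migration ratio. The parameter values (p = 5/12, peri = 1.2), the greedy parent/offspring selection, and the omission of the butterfly adjusting operator are assumptions of this sketch, not the full published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def migration_operator(land1, land2, p=5.0 / 12.0, peri=1.2):
    """Build new positions for subpopulation 1: each dimension is copied either
    from a random butterfly in Land 1 (roughly with probability p) or from Land 2."""
    offspring = np.empty_like(land1)
    for i in range(land1.shape[0]):
        for d in range(land1.shape[1]):
            if rng.random() * peri <= p:
                offspring[i, d] = land1[rng.integers(land1.shape[0]), d]
            else:
                offspring[i, d] = land2[rng.integers(land2.shape[0]), d]
    return offspring

# Tiny demo on a 2-D sphere function: split the population into two lands,
# apply the operator, and keep the better of parent/offspring (greedy selection).
def sphere(x):
    return np.sum(x ** 2, axis=1)

pop = rng.uniform(-5, 5, size=(12, 2))
pop = pop[np.argsort(sphere(pop))]          # sort by fitness
land1, land2 = pop[:5], pop[5:]             # Land 1 / Land 2 split (sizes assumed)
new_land1 = migration_operator(land1, land2)
keep = sphere(new_land1) < sphere(land1)
land1[keep] = new_land1[keep]
print("best fitness:", sphere(np.vstack([land1, land2])).min())
```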
{
"docid": "f39d7a353e289e7aa13f060c93a81acd",
"text": "Functional magnetic resonance imaging was used to study brain regions implicated in retrieval of memories that are decades old. To probe autobiographical memory, family photographs were selected by confederates without the participant's involvement, thereby eliminating many of the variables that potentially confounded previous neuroimaging studies. We found that context-rich memories were associated with activity in lingual and precuneus gyri independently of their age. By contrast, retrosplenial cortex was more active for recent events regardless of memory vividness. Hippocampal activation was related to the richness of re-experiencing (vividness) rather than the age of the memory per se. Remote memories were associated with distributed activation along the rostrocaudal axis of the hippocampus whereas activation associated with recent memories was clustered in the anterior portion. This may explain why circumscribed lesions to the hippocampus disproportionately affect recent memories. These findings are incompatible with theories of long-term memory consolidation, and are more easily accommodated by multiple-trace theory, which posits that detailed memories are always dependent on the hippocampus.",
"title": ""
},
{
"docid": "a08e91040414d6bbec156a5ee90d854d",
"text": "MapReduce has emerged as an important paradigm for processing data in large data centers. MapReduce is a three phase algorithm comprising of Map, Shuffle and Reduce phases. Due to its widespread deployment, there have been several recent papers outlining practical schemes to improve the performance of MapReduce systems. All these efforts focus on one of the three phases to obtain performance improvement. In this paper, we consider the problem of jointly scheduling all three phases of the MapReduce process with a view of understanding the theoretical complexity of the joint scheduling and working towards practical heuristics for scheduling the tasks. We give guaranteed approximation algorithms and outline several heuristics to solve the joint scheduling problem.",
"title": ""
},
{
"docid": "dad7dbbb31f0d9d6268bfdc8303d1c9c",
"text": "This letter proposes a reconfigurable microstrip patch antenna with polarization states being switched among linear polarization (LP), left-hand (LH) and right-hand (RH) circular polarizations (CP). The CP waves are excited by two perturbation elements of loop slots in the ground plane. A p-i-n diode is placed on every slot to alter the current direction, which determines the polarization state. The influences of the slots and p-i-n diodes on antenna performance are minimized because the slots and diodes are not on the patch. The simulated and measured results verified the effectiveness of the proposed antenna configuration. The experimental bandwidths of the -10-dB reflection coefficient for LHCP and RHCP are about 60 MHz, while for LP is about 30 MHz. The bandwidths of the 3-dB axial ratio for both CP states are 20 MHz with best value of 0.5 dB at the center frequency on the broadside direction. Gains for two CP operations are 6.4 dB, and that for the LP one is 5.83 dB. This reconfigurable patch antenna with agile polarization has good performance and concise structure, which can be used for 2.4 GHz wireless communication systems.",
"title": ""
},
{
"docid": "4b4ff17023cf54fe552697ef83c83926",
"text": "Artificial intelligence has been an active branch of research for computer scientists and psychologists for 50 years. The concept of mimicking human intelligence in a computer fuels the public imagination and has led to countless academic papers, news articles and fictional works. However, public expectations remain largely unfulfilled, owing to the incredible complexity of everyday human behavior. A wide range of tools and techniques have emerged from the field of artificial intelligence, many of which are reviewed here. They include rules, frames, model-based reasoning, case-based reasoning, Bayesian updating, fuzzy logic, multiagent systems, swarm intelligence, genetic algorithms, neural networks, and hybrids such as blackboard systems. These are all ingenious, practical, and useful in various contexts. Some approaches are pre-specified and structured, while others specify only low-level behavior, leaving the intelligence to emerge through complex interactions. Some approaches are based on the use of knowledge expressed in words and symbols, whereas others use only mathematical and numerical constructions. It is proposed that there exists a spectrum of intelligent behaviors from low-level reactive systems through to high-level systems that encapsulate specialist expertise. Separate branches of research have made strides at both ends of the spectrum, but difficulties remain in devising a system that spans the full spectrum of intelligent behavior, including the difficult areas in the middle that include common sense and perception. Artificial intelligence is increasingly appearing in situated systems that interact with their physical environment. As these systems become more compact they are likely to become embedded into everyday equipment. As the 50th anniversary approaches of the Dartmouth conference where the term ‘artificial intelligence’ was first published, it is concluded that the field is in good shape and has delivered some great results. Yet human thought processes are incredibly complex, and mimicking them convincingly remains an elusive challenge. ADVANCES IN COMPUTERS, VOL. 65 1 Copyright © 2005 Elsevier Inc. ISSN: 0065-2458/DOI 10.1016/S0065-2458(05)65001-2 All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b6cae67c818937f9541d78b6b8472b86
|
Parallel Selective Algorithms for Nonconvex Big Data Optimization
|
[
{
"docid": "e2a9bb49fd88071631986874ea197bc1",
"text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"title": ""
}
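The FISTA iteration referenced above is compact enough to sketch. The version below solves a small LASSO problem (least squares plus an l1 penalty), where the proximal step reduces to soft-thresholding and the step size is 1/L with L the largest eigenvalue of AᵀA; the synthetic problem data, the fixed iteration count, and the choice of LASSO as the test problem are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.eigvalsh(A.T @ A).max()          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)    # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[:5] = [1.5, -2.0, 3.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = fista_lasso(A, b, lam=0.1)
print("nonzeros recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

Dropping the momentum step (setting y = x_new) recovers plain ISTA, which typically needs many more iterations to reach the same accuracy.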
] |
[
{
"docid": "dee76f07eb39e33e59608a2544215c0a",
"text": "We ask, and answer, the question of what’s computable by Turing machines equipped with time travel into the past: that is, closed timelike curves or CTCs (with no bound on their size). We focus on a model for CTCs due to Deutsch, which imposes a probabilistic consistency condition to avoid grandfather paradoxes. Our main result is that computers with CTCs can solve exactly the problems that are Turing-reducible to the halting problem, and that this is true whether we consider classical or quantum computers. Previous work, by Aaronson and Watrous, studied CTC computers with a polynomial size restriction, and showed that they solve exactly the problems in PSPACE, again in both the classical and quantum cases. Compared to the complexity setting, the main novelty of the computability setting is that not all CTCs have fixed-points, even probabilistically. Despite this, we show that the CTCs that do have fixed-points suffice to solve the halting problem, by considering fixed-point distributions involving infinite geometric series. The tricky part is to show that even quantum computers with CTCs can be simulated using a Halt oracle. For that, we need the Riesz representation theorem from functional analysis, among other tools. We also study an alternative model of CTCs, due to Lloyd et al., which uses postselection to “simulate” a consistency condition, and which yields BPPpath in the classical case or PP in the quantum case when subject to a polynomial size restriction. With no size limit, we show that postselected CTCs yield only the computable languages if we impose a certain finiteness condition, or all languages nonadaptively reducible to the halting problem if we don’t.",
"title": ""
},
{
"docid": "9164bd704cdb8ca76d0b5f7acda9d4ef",
"text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"title": ""
},
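The pooling idea in the record above — run weight-shared branches over a set of transformed copies of the input and keep, per feature, the maximum response across branches — can be shown without a full CNN. In the sketch below the shared "branch" is a fixed random linear map with a ReLU and the transformation set is the four 90-degree rotations; both are stand-ins for the convolutional layers and the transformation sets used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate90(img, k):
    """A cheap, exact transformation set: rotations by multiples of 90 degrees."""
    return np.rot90(img, k)

def features(img, W):
    """Stand-in for the weight-shared branch: a fixed linear map plus ReLU."""
    return np.maximum(W @ img.ravel(), 0.0)

def ti_pool(img, W, n_rot=4):
    """Transformation-invariant pooling: element-wise max over all branch outputs."""
    branch_outputs = np.stack([features(rotate90(img, k), W) for k in range(n_rot)])
    return branch_outputs.max(axis=0)

img = rng.random((8, 8))
W = rng.normal(size=(16, 64))
f1 = ti_pool(img, W)
f2 = ti_pool(rotate90(img, 1), W)
print(np.allclose(f1, f2))   # True: the pooled features ignore the rotation
```

The final check passes because rotating the input merely permutes the branch outputs, so their element-wise maximum is unchanged.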
{
"docid": "9043a5aae40471cb9f671a33725b0072",
"text": "In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test- driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",
"title": ""
},
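The test-driven workflow referenced above is simple to illustrate: the unit test is written first, fails, and then drives the minimal implementation. The shopping-cart example below is generic and hypothetical; it has nothing to do with the retail software described in the study.

```python
# In TDD the test below is written first and fails; the Cart class is then written
# to make it pass (in a real project the test would live in its own test module).

class Cart:
    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self, discount=0.0):
        subtotal = sum(price * qty for _, price, qty in self._items)
        return subtotal * (1.0 - discount)

def test_total_applies_percentage_discount():
    cart = Cart()
    cart.add("widget", price=10.0, quantity=3)
    assert cart.total(discount=0.5) == 15.0

if __name__ == "__main__":
    test_total_applies_percentage_discount()
    print("test passed")
```

Kept under version control and run on every build, such tests become the reusable quality asset the study describes.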
{
"docid": "08f7c7d3bc473e929b4a224636f2a887",
"text": "Some existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient since empty voxels or background pixels are wasteful. We propose a novel approach that addresses this limitation by replacing masks with “deformation-fields”. Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training we use a combination of perview loss and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.",
"title": ""
},
{
"docid": "4ddfa45a585704edcca612f188cc6b78",
"text": "This paper presents a case study of using distributed word representations, word2vec in particular, for improving performance of Named Entity Recognition for the eCommerce domain. We also demonstrate that distributed word representations trained on a smaller amount of in-domain data are more effective than word vectors trained on very large amount of out-of-domain data, and that their combination gives the best results.",
"title": ""
},
{
"docid": "5cc26542d0f4602b2b257e19443839b3",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.",
"title": ""
},
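The interacting submodels in the paper are far richer than a textbook queue, but the flavor of an analytical performance submodel can be conveyed with a plain M/M/c model: the Erlang-C formula gives the probability that an arriving request must wait, and from it the mean waiting time. Treating the cloud center as M/M/c (Poisson arrivals, exponential service, no batch arrivals) and the example rates below are simplifying assumptions, not the paper's model.

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must wait in an M/M/c queue."""
    a = arrival_rate / service_rate              # offered load (Erlangs)
    rho = a / servers                            # server utilization, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable (utilization >= 1)")
    top = (a ** servers / factorial(servers)) * (1.0 / (1.0 - rho))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

def mean_wait(arrival_rate, service_rate, servers):
    """Mean time a request spends waiting before service starts."""
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)

# Example: 90 requests/s arriving, each server completes 10 requests/s, 10 servers.
print(erlang_c(90, 10, 10), mean_wait(90, 10, 10))
```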
{
"docid": "15ccdecd20bbd9c4b93c57717cbfb787",
"text": "As a crucial challenge for video understanding, exploiting the spatial-temporal structure of video has attracted much attention recently, especially on video captioning. Inspired by the insight that people always focus on certain interested regions of video content, we propose a novel approach which will automatically focus on regions-of-interest and catch their temporal structures. In our approach, we utilize a specific attention model to adaptively select regions-of-interest for each video frame. Then a Dual Memory Recurrent Model (DMRM) is introduced to incorporate temporal structure of global features and regions-of-interest features in parallel, which will obtain rough understanding of video content and particular information of regions-of-interest. Since the attention model could not always catch the right interests, we additionally adopt semantic supervision to attend to interested regions more correctly. We evaluate our method for video captioning on two public benchmarks: the Microsoft Video Description Corpus (MSVD) and the Montreal Video Annotation Dataset (M-VAD). The experiments demonstrate that catching temporal regions-of-interest information really enhances the representation of input videos and our approach obtains the state-of-the-art results on popular evaluation metrics like BLEU-4, CIDEr, and METEOR.",
"title": ""
},
{
"docid": "a6815743923b1f46aee28534597611a9",
"text": "Prognostics focuses on predicting the future performance of a system, specifically the time at which the system no long performs its desired functionality, its time to failure. As an important aspect of prognostics, remaining useful life (RUL) prediction estimates the remaining usable life of a system, which is essential for maintenance decision making and contingency mitigation. A significant amount of research has been reported in the literature to develop prognostics models that are able to predict a system's RUL. These models can be broadly categorized into experience-based models, date-driven models, and physics-based models. However, due to system complexity, data availability, and application constraints, there is no universally accepted best model to estimate RUL. The review part of this paper specifically focused on the development of hybrid prognostics approaches, attempting to leverage the advantages of combining the prognostics models in the aforementioned different categories for RUL prediction. The hybrid approaches reported in the literature were systematically classified by the combination and interfaces of various types of prognostics models. In the case study part, a hybrid prognostics method was proposed and applied to a battery degradation case to show the potential benefit of the hybrid prognostics approach.",
"title": ""
},
{
"docid": "1dc41e5c43fc048bc1f1451eaa1ff764",
"text": "249 words) + Body (6178 words) + 4 Figures = 7,427 Total Words Luis Fernando Molina [email protected] (217) 244-6063 Esther Resendiz [email protected] (217) 244-4174 J. Riley Edwards [email protected] (217) 244-7417 John M. Hart [email protected] (217) 244-4174 Christopher P. L. Barkan [email protected] (217) 244-6338 Narendra Ahuja [email protected] (217) 333-1837 3 Corresponding author Molina et al. 11-1442 2 ABSTRACT Individual railroad track maintenance standards and the Federal Railroad Administration (FRA)Individual railroad track maintenance standards and the Federal Railroad Administration (FRA) Track Safety Standards require periodic inspection of railway infrastructure to ensure safe and efficient operation. This inspection is a critical, but labor-intensive task that results in large annual operating expenditures and has limitations in speed, quality, objectivity, and scope. To improve the cost-effectiveness of the current inspection process, machine vision technology can be developed and used as a robust supplement to manual inspections. This paper focuses on the development and performance of machine vision algorithms designed to recognize turnout components, as well as the performance of algorithms designed to recognize and detect defects in other track components. In order to prioritize which components are the most critical for the safe operation of trains, a risk-based analysis of the FRA Accident Database was performed. Additionally, an overview of current technologies for track and turnout component condition assessment is presented. The machine vision system consists of a video acquisition system for recording digital images of track and customized algorithms to identify defects and symptomatic conditions within the images. A prototype machine vision system has been developed for automated inspection of rail anchors and cut spikes, as well as tie recognition. Experimental test results from the system have shown good reliability for recognizing ties, anchors, and cut spikes. This machine vision system, in conjunction with defect analysis and trending of historical data, will enhance the ability for longer-term predictive assessment of the health of the track system and its components. Molina et al. 11-1442 3 INTRODUCTION Railroads conduct regular inspections of their track in order to maintain safe and efficient operation. In addition to internal railroad inspection procedures, periodic track inspections are required under the Federal Railroad Administration (FRA) Track Safety Standards. The objective of this research is to investigate the feasibility of developing a machine vision system to make track inspection more efficient, effective, and objective. In addition, interim approaches to automated track inspection are possible, which will potentially lead to greater inspection effectiveness and efficiency prior to full machine vision system development and implementation. Interim solutions include video capture using vehicle-mounted cameras, image enhancement using image-processing software, and assisted automation using machine vision algorithms (1). The primary focus of this research is inspection of North American Class I railroad mainline and siding tracks, as these generally experience the highest traffic densities. High traffic densities necessitate frequent inspection and more stringent maintenance requirements, and leave railroads less time to accomplish it. 
This makes them the most likely locations for cost-effective investment in new, more efficient, but potentially more capital-intensive inspection technology. The algorithms currently under development will also be adaptable to many types of infrastructure and usage, including transit and some components of high-speed rail (HSR) infrastructure. The machine vision system described in this paper was developed through an interdisciplinary research collaboration at the University of Illinois at Urbana-Champaign (UIUC) between the Computer Vision and Robotics Laboratory (CVRL) at the Beckman Institute for Advanced Science and Technology and the Railroad Engineering Program in the Department of Civil and Environmental Engineering. CURRENT TRACK INSPECTION TECHNOLOGIES USING MACHINE VISION The international railroad community has undertaken significant research to develop innovative applications for advanced technologies with the objective of improving the process of visual track inspection. The development of machine vision, one such inspection technology which uses video cameras, optical sensors, and custom designed algorithms, began in the early 1990’s with work analyzing rail surface defects (2). Machine vision systems are currently in use or under development for a variety of railroad inspection tasks, both wayside and mobile, including inspection of joint bars, surface defects in the rail, rail profile, ballast profile, track gauge, intermodal loading efficiency, railcar structural components, and railcar safety appliances (1, 3-21, 23). The University of Illinois at Urbana-Champaign (UIUC) has been involved in multiple railroad machine-vision research projects sponsored by the Association of American Railroads (AAR), BNSF Railway, NEXTRANS Region V Transportation Center, and the Transportation Research Board (TRB) High-Speed Rail IDEA Program (6-11). In this section, we provide a brief overview of machine vision condition monitoring applications currently in use or under development for inspection of railway infrastructure. Railway applications of machine vision technology have three main elements: the image acquisition system, the image analysis system, and the data analysis system (1). The attributes and performance of each of these individual components determines the overall performance of a machine vision system. Therefore, the following review includes a discussion of the overall Molina et al. 11-1442 4 machine vision system, as well as approaches to image acquisition, algorithm development techniques, lighting methodologies, and experimental results. Rail Surface Defects The Institute of Digital Image Processing (IDIP) in Austria has developed a machine vision system for rail surface inspection during the rail manufacturing process (12). Currently, rail inspection is carried out by humans and complemented with eddy current systems. The objective of this machine vision system is to replace visual inspections on rail production lines. The machine vision system uses spectral image differencing procedure (SIDP) to generate threedimensional (3D) images and detect surface defects in the rails. Additionally, the cameras can capture images at speeds up to 37 miles per hour (mph) (60 kilometers per hour (kph)). Although the system is currently being used only in rail production lines, it can also be attached to an inspection vehicle for field inspection of rail. 
Additionally, the Institute of Intelligent Systems for Automation (ISSIA) in Italy has been researching and developing a system for detecting rail corrugation (13). The system uses images of 512x2048 pixels in resolution, artificial light, and classification of texture to identify surface defects. The system is capable of acquiring images at speeds of up to 125 mph (200 kph). Three image-processing methods have been proposed and evaluated by IISA: Gabor, wavelet, and Gabor wavelet. Gabor was selected as the preferred processing technique. Currently, the technology has been implemented through the patented system known as Visual Inspection System for Railways (VISyR). Rail Wear The Moscow Metro and the State of Common Means of Moscow developed photonic system to measure railhead wear (14). The system consists of 4 CCD cameras and 4 laser lights mounted on an inspection vehicle. The cameras are connected to a central computer that receives images every 20 nanoseconds (ns). The system extracts the profile of the rail using two methods (cut-off and tangent) and the results are ultimately compared with pre-established rail wear templates. Tie Condition The Georgetown Rail Equipment Company (GREX) has developed and commercialized a crosstie inspection system called AURORA (15). The objective of the system is to inspect and classify the condition of timber and concrete crossties. Additionally, the system can be adapted to measure rail seat abrasion (RSA) and detect defects in fastening systems. AURORA uses high-definition cameras and high-voltage lasers as part of the lighting arrangement and is capable of inspecting 70,000 ties per hour at a speed of 30-45 mph (48-72 kph). The system has been shown to replicate results obtained by track inspectors with an accuracy of 88%. Since 2008, Napier University in Sweden has been researching the use of machine vision technology for inspection of timber crossties (16). Their system evaluates the condition of the ends of the ties and classifies them into one of two categories: good or bad. This classification is performed by evaluating quantitative parameters such as the number, length, and depth of cracks, as well as the condition of the tie plate. Experimental results showed that the system has an accuracy of 90% with respect to the correct classification of ties. Future research work includes evaluation of the center portion of the ties and integration with other non-destructive testing (NDT) applications. Molina et al. 11-1442 5 In 2003, the University of Zaragoza in Spain began research on the development of machine vision techniques to inspect concrete crossties using a stereo-metric system to measure different surface shapes (17). The system is used to estimate the deviation from the required dimensional tolerances of the concrete ties in production lines. Two CCD cameras with a resolution of 768x512 pixels are used for image capture and lasers are used for artificial lighting. The system has been shown to produce reliable results, but quantifiable results were not found in the available literature. Ballast The ISS",
"title": ""
},
{
"docid": "2b3de55ff1733fac5ee8c22af210658a",
"text": "With faster connection speed, Internet users are now making social network a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platform can be used to predict political elections, evaluates economic indicators and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual contentabstraction, but also for the relationship between such abstraction and final sentiment remains unknown.In this paper, we dedicated to find outsuch relationship.We proposed a SentiPairSequence basedspatiotemporal visual sentiment ontology, which forms the midlevel representations for GIFsentiment. The establishment process of SentiPair contains two steps. First, we construct the Synset Forest to define the semantic tree structure of visual sentiment label elements. Then, through theSynset Forest, we organically select and combine sentiment label elements to form a mid-level visual sentiment representation. Our experiments indicate that SentiPair outperforms other competing mid-level attributes. Using SentiPair, our analysis frameworkcan achieve satisfying prediction accuracy (72.6%). We also opened ourdataset (GSO-2015) to the research community. GSO-2015 contains more than 6,000 manually annotated GIFs out of more than 40,000 candidates. Each is labeled with both sentiment and SentiPair Sequence.",
"title": ""
},
{
"docid": "5883597258387e83c4c5b9c1e896c818",
"text": "Techniques making use of Deep Neural Networks (DNN) have recently been seen to bring large improvements in textindependent speaker recognition. In this paper, we verify that the DNN based methods result in excellent performances in the context of text-dependent speaker verification as well. We build our system on the previously introduced HMM based ivector approach, where phone models are used to obtain frame level alignment in order to collect sufficient statistics for ivector extraction. For comparison, we experiment with an alternative alignment obtained directly from the output of DNN trained for phone classification. We also experiment with DNN based bottleneck features and their combinations with standard cepstral features. Although the i-vector approach is generally considered not suitable for text-dependent speaker verification, we show that our HMM based approach combined with bottleneck features provides truly state-of-the-art performance on RSR2015 data.",
"title": ""
},
{
"docid": "be19dab37fdd4b6170816defbc550e2e",
"text": "A new continuous transverse stub (CTS) antenna array is presented in this paper. It is built using the substrate integrated waveguide (SIW) technology and designed for beam steering applications in the millimeter waveband. The proposed CTS antenna array consists of 18 stubs that are arranged in the SIW perpendicular to the wave propagation. The performance of the proposed CTS antenna array is demonstrated through simulation and measurement results. From the experimental results, the peak gain of 11.63-16.87 dBi and maximum radiation power of 96.8% are achieved in the frequency range 27.06-36 GHz with low cross-polarization level. In addition, beam steering capability is achieved in the maximum radiation angle range varying from -43° to 3 ° depending on frequency.",
"title": ""
},
{
"docid": "87e2d691570403ae36e0a9a87099ad71",
"text": "Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. Nowadays, however, opera is frequently performed in the original language with surtitles in the target language projected on to the stage. Furthermore, electronic librettos placed on the back of each seat containing translations are now becoming widely available. However, to date most research in audiovisual translation has been dedicated to the field of screen translation, which, while being both audiovisual and multimedial in nature, is specifically understood to refer to the translation of films and other products for cinema, TV, video and DVD. After the introduction of the first talking pictures in the 1920s a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone. Television screens, computer screens and a series of devices such as DVD players, video game consoles, GPS navigation devices and mobile phones are also able to send out audiovisual products to be translated into scores of languages. Hence, strictly speaking, screen translation includes translations for any electronic appliance with a screen; however, for the purposes of this chapter, the term will be used mainly to refer to translations for the most popular products, namely for cinema, TV, video and DVD, and videogames. The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling.1 Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the",
"title": ""
},
{
"docid": "7cbe504e03ab802389c48109ed1f1802",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "ce2f8135fe123e09b777bd147bec4bb3",
"text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to mass vast quantities of unlabeled data, but would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.",
"title": ""
},
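ActNeT's network-aware instance selection is specific to the paper, but the surrounding pool-based active-learning loop is generic and worth sketching. The loop below uses plain uncertainty sampling (query the instance the current model is least confident about) with scikit-learn's logistic regression on synthetic data; the classifier choice, the query budget, and the data are assumptions, and no network information is used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

# Seed the labeled set with five examples per class; the rest form the unlabeled pool.
labeled = [int(i) for c in (0, 1) for i in np.flatnonzero(y == c)[:5]]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                          # 20 query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    confidence = proba.max(axis=1)           # least-confident = most informative
    query = pool[int(np.argmin(confidence))]
    labeled.append(query)                    # "ask the oracle" for this label
    pool.remove(query)

print("labels used:", len(labeled), "accuracy on full set:", model.score(X, y))
```

A network-aware variant would replace the confidence score with a criterion that also rewards instances that are representative of, or central in, the social graph.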
{
"docid": "d9b261d1ed01f40ca22e7955c015d72c",
"text": "A series of experiments has investigated the relationship between the playing of background music during the performance of repetitive work and efficiency in performing such a task. The results give strong support to the contention that economic benefits can accure from the use of music in industry. The studies show that music is effective in raising efficiency in this type of work even when in competition with the unfavourable conditions produced by machine noise.",
"title": ""
},
{
"docid": "8a9191c256f62b7efce93033752059e6",
"text": "Food products fermented by lactic acid bacteria have long been used for their proposed health promoting properties. In recent years, selected probiotic strains have been thoroughly investigated for specific health effects. Properties like relief of lactose intolerance symptoms and shortening of rotavirus diarrhoea are now widely accepted for selected probiotics. Some areas, such as the treatment and prevention of atopy hold great promise. However, many proposed health effects still need additional investigation. In particular the potential benefits for the healthy consumer, the main market for probiotic products, requires more attention. Also, the potential use of probiotics outside the gastrointestinal tract deserves to be explored further. Results from well conducted clinical studies will expand and increase the acceptance of probiotics for the treatment and prevention of selected diseases.",
"title": ""
},
{
"docid": "77e5724ff3b8984a1296731848396701",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of timevarying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we V. Nicosia ( ) Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK e-mail: [email protected] Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy J. Tang C. Mascolo Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK M. Musolesi ( ) School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK e-mail: [email protected] G. Russo Dipartimento di Matematica e Informatica, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy V. Latora Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy School of Mathematical Sciences, Queen Mary, University of London, E1 4NS London, UK Dipartimento di Fisica e Astronomia and INFN, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy P. Holme and J. Saramäki (eds.), Temporal Networks, Understanding Complex Systems, DOI 10.1007/978-3-642-36461-7 2, © Springer-Verlag Berlin Heidelberg 2013 15 16 V. Nicosia et al. discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
},
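One notion the chapter revisits — reachability under time ordering — can be sketched directly: a node is temporally reachable only through a path whose link timestamps increase along the way. The contact list below is a toy example, and the strictly-increasing-time convention (rather than non-decreasing times or one hop per snapshot) is an assumption of this sketch.

```python
from collections import defaultdict

# A time-varying graph as a list of timestamped directed contacts (u, v, t).
contacts = [("a", "b", 1), ("b", "c", 2), ("c", "d", 3), ("d", "a", 1)]

def temporal_reachable(contacts, source):
    """Earliest-arrival sweep: v is reachable from source if some time-respecting
    path (strictly increasing timestamps) leads to it."""
    arrival = defaultdict(lambda: float("inf"))
    arrival[source] = float("-inf")          # the source is present from the start
    for u, v, t in sorted(contacts, key=lambda c: c[2]):   # process links in time order
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t                   # earliest time we can be at v
    return {node for node, t in arrival.items() if t < float("inf")}

print(sorted(temporal_reachable(contacts, "a")))  # ['a', 'b', 'c', 'd']
print(sorted(temporal_reachable(contacts, "d")))  # ['a', 'd']: no link leaves 'a' after t=1
```

The asymmetry between the two queries is exactly the effect of time ordering on causality that static graph metrics miss.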
{
"docid": "85a01086e72befaccff9b8741b920fdf",
"text": "While search engines are the major sources of content discovery on online content providers and e-commerce sites, their capability is limited since textual descriptions cannot fully describe the semantic of content such as videos. Recommendation systems are now widely used in online content providers and e-commerce sites and play an important role in discovering content. In this paper, we describe how one can boost the popularity of a video through the recommendation system in YouTube. We present a model that captures the view propagation between videos through the recommendation linkage and quantifies the influence that a video has on the popularity of another video. Furthermore, we identify that the similarity in titles and tags is an important factor in forming the recommendation linkage between videos. This suggests that one can manipulate the metadata of a video to boost its popularity.",
"title": ""
},
{
"docid": "9d87c71c136264a03a74139417bd7a1e",
"text": "Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a “fast” reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose (“slow”) RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the “fast” RL algorithm on the current (previously unseen) MDP. We evaluate RL experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-armed bandit problems and finite MDPs. After RL is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the largescale side, we test RL on a vision-based navigation task and show that it scales up to high-dimensional problems.",
"title": ""
}
] |
scidocsrr
|
ab5ee24b16e16adfa05daec9c1e3ed52
|
Computational models for text summarization
|
[
{
"docid": "1c68002a59372ea3b206ea6127545670",
"text": "We present a novel graph-based summarization framework (Opinosis) that generates concise abstractive summaries of highly redundant opinions. Evaluation results on summarizing user reviews show that Opinosis summaries have better agreement with human summaries compared to the baseline extractive method. The summaries are readable, reasonably well-formed and are informative enough to convey the major opinions.",
"title": ""
},
{
"docid": "46b6b08d160a95e42c187f756a6c3977",
"text": "We have created layers of annotation on the English Gigaword v.5 corpus to render it useful as a standardized corpus for knowledge extraction and distributional semantics. Most existing large-scale work is based on inconsistent corpora which often have needed to be re-annotated by research teams independently, each time introducing biases that manifest as results that are only comparable at a high level. We provide to the community a public reference set based on current state-of-the-art syntactic analysis and coreference resolution, along with an interface for programmatic access. Our goal is to enable broader involvement in large-scale knowledge-acquisition efforts by researchers that otherwise may not have had the ability to produce such a resource on their own.",
"title": ""
},
{
"docid": "00ccf224c9188cf26f1da60ec9aa741b",
"text": "In recent years, distributed representations of inputs have led to performance gains in many applications by allowing statistical information to be shared across inputs. However, the predicted outputs (labels, and more generally structures) are still treated as discrete objects even though outputs are often not discrete units of meaning. In this paper, we present a new formulation for structured prediction where we represent individual labels in a structure as dense vectors and allow semantically similar labels to share parameters. We extend this representation to larger structures by defining compositionality using tensor products to give a natural generalization of standard structured prediction approaches. We define a learning objective for jointly learning the model parameters and the label vectors and propose an alternating minimization algorithm for learning. We show that our formulation outperforms structural SVM baselines in two tasks: multiclass document classification and part-of-speech tagging.",
"title": ""
}
] |
[
{
"docid": "8b764c3b6576e8334979503d9d76a8d3",
"text": "Twitter is a well-known micro-blogging website which allows millions of users to interact over different types of communities, topics, and tweeting trends. The big data being generated on Twitter daily, and its significant impact on social networking, has motivated the application of data mining (analysis) to extract useful information from tweets. In this paper, we analyze the impact of tweets in predicting the winner of the recent 2013 election held in Pakistan. We identify relevant Twitter users, pre-process their tweets, and construct predictive models for three representative political parties which were significantly tweeted, i.e., Pakistan Tehreek-e-Insaaf (PTI), Pakistan Muslim League Nawaz (PMLN), and Muttahida Qaumi Movement (MQM). The predictions for last four days before the elections showed that PTI will emerge as the election winner, which was actually won by PMLN. However, considering that PTI obtained landslide victory in one province and bagged several important seats across the country, we conclude that Twitter can have some type of a positive influence on the election result, although it cannot be considered representative of the overall voting population.",
"title": ""
},
{
"docid": "719fc4274008294688257ea36f0d5661",
"text": "BACKGROUND\nGlobally, mobile phones have achieved wide reach at an unprecedented rate, and mobile phone apps have become increasingly prevalent among users. The number of health-related apps that were published on the two leading platforms (iOS and Android) reached more than 100,000 in 2014. However, there is a lack of synthesized evidence regarding the effectiveness of mobile phone apps in changing people's health-related behaviors.\n\n\nOBJECTIVE\nThe aim was to examine the effectiveness of mobile phone apps in achieving health-related behavior change in a broader range of interventions and the quality of the reported studies.\n\n\nMETHODS\nWe conducted a comprehensive bibliographic search of articles on health behavior change using mobile phone apps in peer-reviewed journals published between January 1, 2010 and June 1, 2015. Databases searched included Medline, PreMedline, PsycINFO, Embase, Health Technology Assessment, Education Resource Information Center (ERIC), and Cumulative Index to Nursing and Allied Health Literature (CINAHL). Articles published in the Journal of Medical Internet Research during that same period were hand-searched on the journal's website. Behavior change mechanisms were coded and analyzed. The quality of each included study was assessed by the Cochrane Risk of Bias Assessment Tool.\n\n\nRESULTS\nA total of 23 articles met the inclusion criteria, arranged under 11 themes according to their target behaviors. All studies were conducted in high-income countries. Of these, 17 studies reported statistically significant effects in the direction of targeted behavior change; 19 studies included in this analysis had a 65% or greater retention rate in the intervention group (range 60%-100%); 6 studies reported using behavior change theories with the theory of planned behavior being the most commonly used (in 3 studies). Self-monitoring was the most common behavior change technique applied (in 12 studies). The studies suggest that some features improve the effectiveness of apps, such as less time consumption, user-friendly design, real-time feedback, individualized elements, detailed information, and health professional involvement. All studies were assessed as having some risk of bias.\n\n\nCONCLUSIONS\nOur results provide a snapshot of the current evidence of effectiveness for a range of health-related apps. Large sample, high-quality, adequately powered, randomized controlled trials are required. In light of the bias evident in the included studies, better reporting of health-related app interventions is also required. The widespread adoption of mobile phones highlights a significant opportunity to impact health behaviors globally, particularly in low- and middle-income countries.",
"title": ""
},
{
"docid": "9962a2910be250ae1e9e702c3aa19a57",
"text": "Security issues related to the cloud computing are relevant to various stakeholders for an informed cloud adoption decision. Apart from data breaches, the cyber security research community is revisiting the attack space for cloud-specific solutions as these issues affect budget, resource management, and service quality. Distributed Denial of Service (DDoS) attack is one such serious attack in the cloud space. In this paper, we present developments related to DDoS attack mitigation solutions in the cloud. In particular, we present a comprehensive survey with a detailed insight into the characterization, prevention, detection, and mitigation mechanisms of these attacks. Additionally, we present a comprehensive solution taxonomy to classify DDoS attack solutions. We also provide a comprehensive discussion on important metrics to evaluate various solutions. This survey concludes that there is a strong requirement of solutions, which are designed keeping utility computing models in mind. Accurate auto-scaling decisions, multi-layer mitigation, and defense using profound resources in the cloud, are some of the key requirements of the desired solutions. In the end, we provide a definite guideline on effective solution building and detailed solution requirements to help the cyber security research community in designing defense mechanisms. To the best of our knowledge, this work is a novel attempt to identify the need of DDoS mitigation solutions involving multi-level information flow and effective resource management during the attack.",
"title": ""
},
{
"docid": "12ee117f58c5bd5b6794de581bfcacdb",
"text": "The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.",
"title": ""
},
{
"docid": "4762cbac8a7e941f26bce8217cf29060",
"text": "The 2-D maximum entropy method not only considers the distribution of the gray information, but also takes advantage of the spatial neighbor information with using the 2-D histogram of the image. As a global threshold method, it often gets ideal segmentation results even when the image s signal noise ratio (SNR) is low. However, its time-consuming computation is often an obstacle in real time application systems. In this paper, the image thresholding approach based on the index of entropy maximization of the 2-D grayscale histogram is proposed to deal with infrared image. The threshold vector (t, s), where t is a threshold for pixel intensity and s is another threshold for the local average intensity of pixels, is obtained through a new optimization algorithm, namely, the particle swarm optimization (PSO) algorithm. PSO algorithm is realized successfully in the process of solving the 2-D maximum entropy problem. The experiments of segmenting the infrared images are illustrated to show that the proposed method can get ideal segmentation result with less computation cost. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b6a8abc8946f8b13a22e3bacd2a6caa5",
"text": "The aim of this research was to determine the sun protection factor (SPF) of sunscreens emulsions containing chemical and physical sunscreens by ultraviolet spectrophotometry. Ten different commercially available samples of sunscreen emulsions of various manufactures were evaluated. The SPF labeled values were in the range of 8 to 30. The SPF values of the 30% of the analyzed samples are in close agreement with the labeled SPF, 30% presented SPF values above the labeled amount and 40% presented SPF values under the labeled amount. The proposed spectrophotometric method is simple and rapid for the in vitro determination of SPF values of sunscreens emulsions. *Correspondence:",
"title": ""
},
{
"docid": "1cae7b0548fc84f00cd36cae1b6f1ceb",
"text": "Combining abstract, symbolic reasoning with continuous neural reasoning is a grand challenge of representation learning. As a step in this direction, we propose a new architecture, called neural equivalence networks, for the problem of learning continuous semantic representations of algebraic and logical expressions. These networks are trained to represent semantic equivalence, even of expressions that are syntactically very different. The challenge is that semantic representations must be computed in a syntax-directed manner, because semantics is compositional, but at the same time, small changes in syntax can lead to very large changes in semantics, which can be difficult for continuous neural architectures. We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures.",
"title": ""
},
{
"docid": "3021a6be2aab29e18f1fe7e77c59a1d8",
"text": "We demonstrate the advantage of specializing semantic word embeddings for either similarity or relatedness. We compare two variants of retrofitting and a joint-learning approach, and find that all three yield specialized semantic spaces that capture human intuitions regarding similarity and relatedness better than unspecialized spaces. We also show that using specialized spaces in NLP tasks and applications leads to clear improvements, for document classification and synonym selection, which rely on either similarity or relatedness but not both.",
"title": ""
},
{
"docid": "2febfc549459450164bfa89f0a6ca964",
"text": "This paper discusses the effectiveness of deep auto-encoder neural networks in visual reinforcement learning (RL) tasks. We propose a framework for combining the training of deep auto-encoders (for learning compact feature spaces) with recently-proposed batch-mode RL algorithms (for learning policies). An emphasis is put on the data-efficiency of this combination and on studying the properties of the feature spaces automatically constructed by the deep auto-encoders. These feature spaces are empirically shown to adequately resemble existing similarities and spatial relations between observations and allow to learn useful policies. We propose several methods for improving the topology of the feature spaces making use of task-dependent information. Finally, we present first results on successfully learning good control policies directly on synthesized and real images.",
"title": ""
},
{
"docid": "95612aa090b77fc660279c5f2886738d",
"text": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions.",
"title": ""
},
{
"docid": "a0e4c16bf0a696be510c232360c33267",
"text": "We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SpoT (as shorthand for Segment-level POlariTy annotations) for evaluating MIL-style sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.",
"title": ""
},
{
"docid": "c3261d1552912642d407b512d08cc6f7",
"text": "Four studies apply self-determination theory (SDT; Ryan & Deci, 2000) in investigating motivation for computer game play, and the effects of game play on wellbeing. Studies 1–3 examine individuals playing 1, 2 and 4 games, respectively and show that perceived in-game autonomy and competence are associated with game enjoyment, preferences, and changes in well-being preto post-play. Competence and autonomy perceptions are also related to the intuitive nature of game controls, and the sense of presence or immersion in participants’ game play experiences. Study 4 surveys an on-line community with experience in multiplayer games. Results show that SDT’s theorized needs for autonomy, competence, and relatedness independently predict enjoyment and future game play. The SDT model is also compared with Yee’s (2005) motivation taxonomy of game play motivations. Results are discussed in terms of the relatively unexplored landscape of human motivation within virtual worlds.",
"title": ""
},
{
"docid": "e5701d5cd667f4f4ab9bbad4da1d659a",
"text": "In clustering algorithms, choosing a subset of representative examples is very important in data set. Such ''exemplars \" can be found by randomly choosing an initial subset of data objects and then iteratively refining it, but this works well only if that initial choice is close to a good solution. In this paper, based on the frequency of attribute values, the average density of an object is defined. Furthermore, a novel ini-tialization method for categorical data is proposed, in which the distance between objects and the density of the object is considered. We also apply the proposed initialization method to k-modes algorithm and fuzzy k-modes algorithm. Experimental results illustrate that the proposed initialization method is superior to random initialization method and can be applied to large data sets for its linear time complexity with respect to the number of data objects. Clustering data based on a measure of similarity is a critical step in scientific data analysis and in engineering systems. A common method is to use data to learn a set of centers such that the sum of squared errors between objects and their nearest centers is small (Brendan & Delbert, 2007). At present, the popular partition clustering technique usually begins with an initial set of randomly selected exemplars and iteratively refines this set so as to decrease the sum of squared errors. Due to the simpleness, random initiali-zation method has been widely used. However, these clustering algorithms need to be rerun many times with different initializa-tions in an attempt to find a good solution. Furthermore, random initialization method works well only when the number of clusters is small and chances are good that at least one random initializa-tion is close to a good solution. Therefore, how to choose initial cluster centers is extremely important as they have a direct impact on the formation of final clusters. Based on the difference in data type, selection of initial cluster centers mainly can be classified into numeric data and categorical data. Aiming at numeric data, several attempts have been reported to solve the cluster initialization to date, few researches are concerned with initialization of categorical data. Huang (1998) introduced two initial mode selection methods for k-modes algorithm. The first method selects the first k distinct objects from the data set as the initial k-modes. The second method assigns the most frequent categories equally to the initial k-modes. Though the second method …",
"title": ""
},
{
"docid": "b4e9cfc0dbac4a5d7f76001e73e8973d",
"text": "Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target’s structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.",
"title": ""
},
{
"docid": "cbd8e376ae26ad4f8b253ca4ad3aa94a",
"text": "Social media allow for an unprecedented amount of interactions and information exchange between people online. A fundamental aspect of human social behavior, however, is the tendency of people to associate themselves with like-minded individuals, forming homogeneous social circles both online and offline. In this work, we apply a new model that allows us to distinguish between social ties of varying strength, and to observe evidence of homophily with regards to politics, music, health, residential sector & year in college, within the online and offline social network of 74 college students. We present a multiplex network approach to social tie strength, here applied to mobile communication data calls, text messages, and co-location, allowing us to dimensionally identify relationships by considering the number of channels utilized between students. We find that strong social ties are characterized by maximal use of communication channels, while weak ties by minimal use. We are able to identify 75% of close friendships, 90% of weaker ties, and 90% of Facebook friendships as compared to reported ground truth. We then show that stronger ties exhibit greater profile similarity than weaker ones. Apart from high homogeneity in social circles with respect to political and health aspects, we observe strong homophily driven by music, residential sector and year in college. Despite Facebook friendship being highly dependent on residence and year, exposure to less homogeneous content can be found in the online rather than the offline social circles of students, most notably in political and music aspects.",
"title": ""
},
{
"docid": "cbae0fd956a12827152e38f93e2b3e7a",
"text": "Bare-metal clouds are an emerging infrastructure-as-a-service (IaaS) that leases physical machines (bare-metal instances) rather than virtual machines, allowing resource-intensive applications to have exclusive access to physical hardware. Unfortunately, bare-metal instances require time-consuming or OS-specific tasks for deployment due to the lack of virtualization layers, thereby sacrificing several beneficial features of traditional IaaS clouds such as agility, elasticity, and OS transparency. We present BMcast, an OS deployment system with a special-purpose de-virtualizable virtual machine monitor (VMM) that supports quick and OS-transparent startup of bare-metal instances. BMcast performs streaming OS deployment while allowing direct access to physical hardware from the guest OS, and then disappears after completing the deployment. Quick startup of instances improves agility and elasticity significantly, and OS transparency greatly simplifies management tasks for cloud customers. Experimental results have confirmed that BMcast initiated a bare-metal instance 8.6 times faster than image copying, and database performance on BMcast during streaming OS deployment was comparable to that on a state-of-the-art VMM without performing deployment. BMcast incurred zero overhead after de-virtualization.",
"title": ""
},
{
"docid": "03609a2c69b670d819d35d42ae0ac354",
"text": "A MAJOR problem in the search for new antidepressant drugs is the lack of animal models which both resemble depressive illness and are selectively sensitive to clinically effective antidepressant treatments. We have been working on a new behavioural model in the rat which attempts to meet these two requirements. The method is based on the observation that a rat, when forced to swim in a situation from which there is no escape, will, after an initial period of vigorous activity, eventually cease to move altogether making only those movements necessary to keep its head above water. We think that this characteristic and readily identifiable behavioural immobility indicates a state of despair in which the rat has learned that escape is impossible and resigns itself to the experimental conditions. This hypothesis receives support from results presented below which indicate that immobility is reduced by different treatments known to be therapeutic in depression including three drugs, iprindole, mianserin and viloxazine which although clinically active1–3 show little or no ‘antidepressant’ activity in the usual animal tests4–6.",
"title": ""
},
{
"docid": "8f3eaf1a65cd3d81e718143304e4ce81",
"text": "Issue tracking systems store valuable data for testing hypotheses concerning maintenance, building statistical prediction models and recently investigating developers \"affectiveness\". In particular, the Jira Issue Tracking System is a proprietary tracking system that has gained a tremendous popularity in the last years and offers unique features like the project management system and the Jira agile kanban board. This paper presents a dataset extracted from the Jira ITS of four popular open source ecosystems (as well as the tools and infrastructure used for extraction) the Apache Software Foundation, Spring, JBoss and CodeHaus communities. Our dataset hosts more than 1K projects, containing more than 700K issue reports and more than 2 million issue comments. Using this data, we have been able to deeply study the communication process among developers, and how this aspect affects the development process. Furthermore, comments posted by developers contain not only technical information, but also valuable information about sentiments and emotions. Since sentiment analysis and human aspects in software engineering are gaining more and more importance in the last years, with this repository we would like to encourage further studies in this direction.",
"title": ""
},
{
"docid": "ff418efbdd2381692f01b5cdc94143d5",
"text": "The U.S. legislation at both the federal and state levels mandates certain organizations to inform customers about information uses and disclosures. Such disclosures are typically accomplished through privacy policies, both online and offline. Unfortunately, the policies are not easy to comprehend, and, as a result, online consumers frequently do not read the policies provided at healthcare Web sites. Because these policies are often required by law, they should be clear so that consumers are likely to read them and to ensure that consumers can comprehend these policies. This, in turn, may increase consumer trust and encourage consumers to feel more comfortable when interacting with online organizations. In this paper, we present results of an empirical study, involving 993 Internet users, which compared various ways to present privacy policy information to online consumers. Our findings suggest that users perceive typical, paragraph-form policies to be more secure than other forms of policy representation, yet user comprehension of such paragraph-form policies is poor as compared to other policy representations. The results of this study can help managers create more trustworthy policies, aid compliance officers in detecting deceptive organizations, and serve legislative bodies by providing tangible evidence as to the ineffectiveness of current privacy policies.",
"title": ""
},
{
"docid": "e442f267a189ac2bfc5fb6f9da081606",
"text": "The prevalence of Autism Spectrum Disorder (ASD) is 1 in 110 persons in the U.S. Both parents of children with ASD are under stress that may impact their health-related quality of life (HRQL) (physical and mental health). The purpose of the current study was to explore the relationship of parenting stress, support from family functioning and the HRQL (physical and mental health) of both parents. Female (n = 64) and male (n = 64) parents of children with ASD completed Web-based surveys examining parenting stress, family functioning, and physical and mental health. Results of a Wilcoxon signed-ranks test showed that female parent discrepant (D) scores between \"what is\" and \"should be\" family functioning were significantly larger than male parents, p = .002. Results of stepwise linear regression for the male-female partners showed that (1) higher female caregiving stress was related to lower female physical health (p < .001), (2) a higher discrepancy score in family functioning predicted lower mental health (p < .001), accounting for 31% of the variance for females and (3) male parent personal and family life stress (p < .001) and family functioning discrepant (D) score (p < .001) predicted poor mental health, with the discrepancy score accounting for 35% of the variance. These findings suggest that there may be differences in mothers' and fathers' perceptions and expectations about family functioning and this difference needs to be explored and applied when working with families of children with ASD.",
"title": ""
}
] |
scidocsrr
|
194eb4db59d2578c68acf2278f07f7aa
|
Visualizing Workload and Emotion Data in Air Traffic Control - An Approach Informed by the Supervisors Decision Making Process
|
[
{
"docid": "9089a8cc12ffe163691d81e319ec0f25",
"text": "Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”",
"title": ""
}
] |
[
{
"docid": "993b753e365e6a1956c425c7d0bf1a2a",
"text": "Injection molding is a very complicated process to monitor and control. With its high complexity and many process parameters, the optimization of these systems is a very challenging problem. To meet the requirements and costs demanded by the market, there has been an intense development and research with the aim to maintain the process under control. This paper outlines the latest advances in necessary algorithms for plastic injection process and monitoring, and also a flexible data acquisition system that allows rapid implementation of complex algorithms to assess their correct performance and can be integrated in the quality control process. This is the main topic of this paper. Finally, to demonstrate the performance achieved by this combination, a real case of use is presented. Keywords—Plastic injection, machine learning, rapid complex algorithm prototyping.",
"title": ""
},
{
"docid": "20662e12b45829c00c67434277ab9a26",
"text": "Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.",
"title": ""
},
{
"docid": "b0747e6cbc20a8e4d9dec0ef75386701",
"text": "The US Vice President, Al Gore, in a speech on the information superhighway, suggested that it could be used to remotely control a nuclear reactor. We do not have enough confidence in computer software, hardware, or networks to attempt this experiment, but have instead built a Internet-accessible, remote-controlled model car that provides a race driver's view via a video camera mounted on the model car. The remote user can see live video from the car, and, using a mouse, control the speed and direction of the car. The challenge was to build a car that could be controlled by novice users in narrow corridors, and that would work not only with the full motion video that the car natively provides, but also with the limited size and frame rate video available over the Internet multicast backbone. We have built a car that has been driven from a site 50 miles away over a 56-kbps IP link using $\\mbox{{\\tt nv}}$ format video at as little as one frame per second and at as low as $100\\times 100$ pixels resolution. We also built hardware to control the car, using a slightly modified voice grade channel videophone. Our experience leads us to believe that it is now possible to put together readily available hardware and software components to build a cheap and effective telepresence.",
"title": ""
},
{
"docid": "dfb68d81ed159e82b6c9f2e930436e97",
"text": "The last decade has seen the fields of molecular biology and genetics transformed by the development of CRISPR-based gene editing technologies. These technologies were derived from bacterial defense systems that protect against viral invasion. Elegant studies focused on the evolutionary battle between CRISPR-encoding bacteria and the viruses that infect and kill them revealed the next step in this arms race, the anti-CRISPR proteins. Investigation of these proteins has provided important new insight into how CRISPR-Cas systems work and how bacterial genomes evolve. They have also led to the development of important biotechnological tools that can be used for genetic engineering, including off switches for CRISPR-Cas9 genome editing in human cells.",
"title": ""
},
{
"docid": "0bc847391ea276e19d91bdb0ab14a5e5",
"text": "Modern machine learning models are beginning to rival human performance on some realistic object recognition tasks, but we still lack a full understanding of how the human brain solves this same problem. This thesis combines knowledge from machine learning and computational neuroscience to create models of human object recognition that are increasingly realistic both in their treatment of low-level neural mechanisms and in their reproduction of high-level human behaviour. First, I present extensions to the Neural Engineering Framework to make its preferred type of model—the “fixed-encoding” network—more accurate for object recognition tasks. These extensions include better distributions—such as Gabor filters—for the encoding weights, and better loss functions—namely weighted squared loss, softmax loss, and hinge loss—to solve for decoding weights. Second, I introduce increased biological realism into deep convolutional neural networks trained with backpropagation, by training them to run using spiking leaky integrate-andfire (LIF) neurons. Convolutional neural networks have been successful in machine learning, and I am able to convert them to spiking networks while retaining similar levels of performance. I present a novel method to smooth the LIF rate response function in order to avoid the common problems associated with differentiating spiking neurons in general and LIF neurons in particular. I also derive a number of novel characterizations of spiking variability, and use these to train spiking networks to be more robust to this variability. Finally, to address the problems with implementing backpropagation in a biological system, I train spiking deep neural networks using the more biological Feedback Alignment algorithm. I examine this algorithm in depth, including many variations on the core algorithm, methods to train using non-differentiable spiking neurons, and some of the limitations of the algorithm. Using these findings, I construct a spiking model that learns online in a biologically realistic manner. The models developed in this thesis help to explain both how spiking neurons in the brain work together to allow us to recognize complex objects, and how the brain may learn this behaviour. Their spiking nature allows them to be implemented on highly efficient neuromorphic hardware, opening the door to object recognition on energy-limited devices such as cell phones and mobile robots.",
"title": ""
},
{
"docid": "4eebd9eb516bf2fe0b89c5d684f1ff96",
"text": "Psychological theories have suggested that creativity involves a twofold process characterized by a generative component facilitating the production of novel ideas and an evaluative component enabling the assessment of their usefulness. The present study employed a novel fMRI paradigm designed to distinguish between these two components at the neural level. Participants designed book cover illustrations while alternating between the generation and evaluation of ideas. The use of an fMRI-compatible drawing tablet allowed for a more natural drawing and creative environment. Creative generation was associated with preferential recruitment of medial temporal lobe regions, while creative evaluation was associated with joint recruitment of executive and default network regions and activation of the rostrolateral prefrontal cortex, insula, and temporopolar cortex. Executive and default regions showed positive functional connectivity throughout task performance. These findings suggest that the medial temporal lobe may be central to the generation of novel ideas and creative evaluation may extend beyond deliberate analytical processes supported by executive brain regions to include more spontaneous affective and visceroceptive evaluative processes supported by default and limbic regions. Thus, creative thinking appears to recruit a unique configuration of neural processes not typically used together during traditional problem solving tasks.",
"title": ""
},
{
"docid": "32817233f5aa05036ca292e7b57143fb",
"text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. Features are trained and classified using Naïve Bayes classifier resulting in labeling of the input as pothole or non-pothole image. To locate the pothole in the detected pothole images, normalized graph cut segmentation scheme is employed. Proposed scheme is tested on a dataset having broad range of pavement images. Experimentation results showed 90 % accuracy for the detection of pothole images and high recall for the localization of pothole in the detected images.",
"title": ""
},
{
"docid": "4b453a0f541d1efcd7e24dfc631aaecb",
"text": "Intelligent tutoring systems (ITSs), which provide step-by-step guidance to students in complex problem-solving activities, have been shown to enhance student learning in a range of domains. However, they tend to be difficult to build. Our project investigates whether the process of authoring an ITS can be simplified, while at the same time maintaining the characteristics that make ITS effective, and also maintaining the ability to support large-scale tutor development. Specifically, our project tests whether authoring tools based on programming-by-demonstration techniques (developed in prior research) can support the development of a large-scale, real-world tutor. We are creating an open-access Web site, called Mathtutor (http://webmathtutor.org), where middle school students can solve math problems with step-by-step guidance from ITS. The Mathtutor site fields example-tracing tutors, a novel type of ITS that are built \"by demonstration,\" without programming, using the cognitive tutor authoring tools (CTATs). The project's main contribution will be that it represents a stringent test of large-scale tutor authoring through programming by demonstration. A secondary contribution will be that it tests whether an open-access site (i.e., a site that is widely and freely available) with software tutors for math learning can attract and sustain user interest and learning on a large scale.",
"title": ""
},
{
"docid": "3eaec3c5f9681f131cde6dd72c3ad141",
"text": "This paper proposes a novel acoustic echo suppression (AES) algorithm based on speech presence probability in a frequency domain. Double talk detection algorithm based on two cross-correlation coefficients modeled by Beta distribution controls the update of echo path response to improve the quality of near-end speech. The near-end speech presence probability combined with the Wiener gain function is used to reduce the residual echo. The performance of the proposed algorithm is evaluated by objective tests. High echo-return-loss enhancement and perceptual evaluation of speech quality (PESQ) scores are obtained by comparing with the conventional AES method.",
"title": ""
},
{
"docid": "c04cc8c930b534d57f729d9e53fd283b",
"text": "This paper presents a morphological classification of languages from the IR perspective. Linguistic typology research has shown that the morphological complexity of each language of the world can be described by two variables, index of synthesis and index of fusion. These variables provide a theoretical basis for IR research handling morphological issues. A common theoretical framework is needed in particular due to the increasing significance of cross-language retrieval research and CLIR systems processing different languages. The paper elaborates the linguistic morphological typology for the purposes of IR research. It is studied how the indices of synthesis and fusion could be used as practical tools in monoand cross-lingual IR research. The need for semantic and syntactic typologies is discussed. The paper also reviews studies done in different languages on the effects of morphology and stemming in IR.",
"title": ""
},
{
"docid": "b052e965bd0a28bf52d8faa6f177ed1a",
"text": "Cloud computing requires comprehensive security sol utions based upon many aspects of a large and loosely integrated system. The application software and databases in cloud computing are moved to the centralized large data centers, where the managemen t of the data and services may not be fully trustwo r hy. Threats, vulnerabilities and risks for cloud comput ing are explained, and then, we have designed a clo ud computing security development lifecycle model to a chieve safety and enable the user to take advantage of this technology as much as possible of security and f ce the risks that may be exposed to data. A data integrity checking algorithm; which eliminates the third party auditing, is explained to protect stati c and dynamic data from unauthorized observation, modific ation, or interference. Keyword: Cloud Computing, Cloud Computing Security, Data Integrity, Cloud Threads, Cloud Risks",
"title": ""
},
{
"docid": "413d0b457cc1b96bf65d8a3e1c98ed41",
"text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.",
"title": ""
},
{
"docid": "c135c90c9af331d89982dce0b4454a87",
"text": "Suicide attempts often are impulsive, yet little is known about the characteristics of impulsive suicide. We examined impulsive suicide attempts within a population-based, case-control study of nearly lethal suicide attempts among people 13-34 years of age. Attempts were considered impulsive if the respondent reported spending less than 5 minutes between the decision to attempt suicide and the actual attempt. Among the 153 case-subjects, 24% attempted impulsively. Impulsive attempts were more likely among those who had been in a physical fight and less likely among those who were depressed. Relative to control subjects, male sex, fighting, and hopelessness distinguished impulsive cases but depression did not. Our findings suggest that inadequate control of aggressive impulses might be a greater indicator of risk for impulsive suicide attempts than depression.",
"title": ""
},
{
"docid": "8b7cc94a7284d4380537418ed9ee0f01",
"text": "The subject matter of this research; employee motivation and performance seeks to look at how best employees can be motivated in order to achieve high performance within a company or organization. Managers and entrepreneurs must ensure that companies or organizations have a competent personnel that is capable to handle this task. This takes us to the problem question of this research “why is not a sufficient motivation for high performance?” This therefore establishes the fact that money is for high performance but there is need to look at other aspects of motivation which is not necessarily money. Four theories were taken into consideration to give an explanation to the question raised in the problem formulation. These theories include: Maslow’s hierarchy of needs, Herzberg two factor theory, John Adair fifty-fifty theory and Vroom’s expectancy theory. Furthermore, the performance management process as a tool to measure employee performance and company performance. This research equally looked at the various reward systems which could be used by a company. In addition to the above, culture and organizational culture and it influence on employee behaviour within a company was also examined. An empirical study was done at Ultimate Companion Limited which represents the case study of this research work. Interviews and questionnaires were conducted to sample employee and management view on motivation and how it can increase performance at the company. Finally, a comparison of findings with theories, a discussion which raises critical issues on motivation/performance and conclusion constitute the last part of the research. Subject headings, (keywords) Motivation, Performance, Intrinsic, Extrinsic, Incentive, Tangible and Intangible, Reward",
"title": ""
},
{
"docid": "4a5abe07b93938e7549df068967731fc",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "7a5fb7d551d412fd8bdbc3183dafc234",
"text": "Presentations have been an effective means of delivering information to groups for ages. Over the past few decades, technological advancements have revolutionized the way humans deliver presentations. Despite that, the quality of presentations can be varied and affected by a variety of reasons. Conventional presentation evaluation usually requires painstaking manual analysis by experts. Although the expert feedback can definitely assist users in improving their presentation skills, manual evaluation suffers from high cost and is often not accessible to most people. In this work, we propose a novel multi-sensor self-quantification framework for presentations. Utilizing conventional ambient sensors (i.e., static cameras, Kinect sensor) and the emerging wearable egocentric sensors (i.e., Google Glass), we first analyze the efficacy of each type of sensor with various nonverbal assessment rubrics, which is followed by our proposed multi-sensor presentation analytics framework. The proposed framework is evaluated on a new presentation dataset, namely NUS Multi-Sensor Presentation (NUSMSP) dataset, which consists of 51 presentations covering a diverse set of topics. The dataset was recorded with ambient static cameras, Kinect sensor, and Google Glass. In addition to multi-sensor analytics, we have conducted a user study with the speakers to verify the effectiveness of our system generated analytics, which has received positive and promising feedback.",
"title": ""
},
{
"docid": "1d98b5bd0c7178b39b7da0e0f9586615",
"text": "TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency in high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that will allow for the same efficiency of TDMA, while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions (\"napping\"), (c) judicious and controlled TDMA slot stealing to avoid empty slots to be unused and (d) intelligent scheduling/ordering transmissions. Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay.",
"title": ""
},
{
"docid": "ab1b9e358d10fc091e8c7eedf4674a8a",
"text": "An effective and efficient defect inspection system for TFT-LCD polarised films using adaptive thresholds and shape-based image analyses Chung-Ho Noha; Seok-Lyong Leea; Deok-Hwan Kimb; Chin-Wan Chungc; Sang-Hee Kimd a School of Industrial and Management Engineering, Hankuk University of Foreign Studies, Yonginshi, Korea b School of Electronics Engineering, Inha University, Yonghyun-dong, Incheon-shi, Korea c Division of Computer Science, KAIST, Daejeon-shi, Korea d Key Technology Research Center, Agency for Defense Development, Daejeon-shi, Korea",
"title": ""
},
{
"docid": "d71af4267f6e54288ecff049748bcd7d",
"text": "Background: The purpose of this study was to investigate the effect of a combined visual efficiency and perceptual-motor training programme on the handwriting performance of Chinese children aged 6 to 9 years with handwriting difficulties (HWD). Methods: Twenty-six children with HWD were assigned randomly and equally into two groups. The training programme was provided over eight consecutive weeks with one session per week. The perceptual-motor group received training only on perceptual-motor functions, including visual spatial relationship, visual sequential memory, visual constancy, visual closure, graphomotor control and grip control. The combined training group received additional training components on visual efficiency, including accommodation, ocular motility, and binocular fusion. Visual efficiency, visual perceptual skills, and Chinese handwriting performance were assessed before and after the training programme. Results: The results showed statistically significant improvement in handwriting speed after the training in both groups. However, the combined training gave no additional benefit on improving handwriting speed (ANCOVA: F=0.43, p=0.52). In terms of visual efficiency, participants in the combined training group showed greater improvement in amplitude of accommodation measured with right eye (F=4.34, p<0.05), left eye (F=5.77, p<0.05) and both eyes (F=11.08, p<0.01). Conclusions: Although the additional visual efficiency training did not provide further improvement in the handwriting speed of children with HWD, children showed improvement in their accommodation amplitude. As accommodative function is important for providing sustainable and clear near vision in the process of reading and word recognition for writing, the effect of the combined training on handwriting performance should be further investigated.",
"title": ""
},
{
"docid": "d878e4bb4b17901a36c2cf7235c4568f",
"text": "Cloud computing is the future generation of computational services delivered over the Internet. As cloud infrastructure expands, resource management in such a large heterogeneous and distributed environment is a challenging task. In a cloud environment, uncertainty and dispersion of resources encounters problems of allocation of resources. Unfortunately, existing resource management techniques, frameworks and mechanisms are insufficient to handle these environments, applications and resource behaviors. To provide an efficient performance and to execute workloads, there is a need of quality of service (QoS) based autonomic resource management approach which manages resources automatically and provides reliable, secure and cost efficient cloud services. In this paper, we present an intelligent QoS-aware autonomic resource management approach named as CHOPPER (Configuring, Healing, Optimizing and Protecting Policy for Efficient Resource management). CHOPPER offers self-configuration of applications and resources, self-healing by handling sudden failures, self-protection against security attacks and self-optimization for maximum resource utilization. We have evaluated the performance of the proposed approach in a real cloud environment and the experimental results show that the proposed approach performs better in terms of cost, execution time, SLA violation, resource contention and also provides security against attacks.",
"title": ""
}
] |
scidocsrr
|
37eefbb5ecbd5dea4f08b1eb61577077
|
Occluded Imaging with Time-of-Flight Sensors
|
[
{
"docid": "1b6ddffacc50ad0f7e07675cfe12c282",
"text": "Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We will show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e. to estimate both simultaneously when neither is known.",
"title": ""
}
] |
[
{
"docid": "e59136e0d0a710643a078b58075bd8cd",
"text": "PURPOSE\nEpidemiological evidence suggests that chronic consumption of fruit-based flavonoids is associated with cognitive benefits; however, the acute effects of flavonoid-rich (FR) drinks on cognitive function in the immediate postprandial period require examination. The objective was to investigate whether consumption of FR orange juice is associated with acute cognitive benefits over 6 h in healthy middle-aged adults.\n\n\nMETHODS\nMales aged 30-65 consumed a 240-ml FR orange juice (272 mg) and a calorie-matched placebo in a randomized, double-blind, counterbalanced order on 2 days separated by a 2-week washout. Cognitive function and subjective mood were assessed at baseline (prior to drink consumption) and 2 and 6 h post consumption. The cognitive battery included eight individual cognitive tests. A standardized breakfast was consumed prior to the baseline measures, and a standardized lunch was consumed 3 h post-drink consumption.\n\n\nRESULTS\nChange from baseline analysis revealed that performance on tests of executive function and psychomotor speed was significantly better following the FR drink compared to the placebo. The effects of objective cognitive function were supported by significant benefits for subjective alertness following the FR drink relative to the placebo.\n\n\nCONCLUSIONS\nThese data demonstrate that consumption of FR orange juice can acutely enhance objective and subjective cognition over the course of 6 h in healthy middle-aged adults.",
"title": ""
},
{
"docid": "106086a4b63a5bfe0554f36c9feff5f5",
"text": "It seems uncontroversial that providing feedback after a test, in the form of the correct answer, enhances learning. In real-world educational situations, however, the time available for learning is often constrained-and feedback takes time. We report an experiment in which total time for learning was fixed, thereby creating a trade-off between spending time receiving feedback and spending time on other learning activities. Our results suggest that providing feedback is not universally beneficial. Indeed, under some circumstances, taking time to provide feedback can have a negative net effect on learning. We also found that learners appear to have some insight about the costs of feedback; when they were allowed to control feedback, they often skipped unnecessary feedback in favor of additional retrieval attempts, and they benefited from doing so. These results underscore the importance of considering the costs and benefits of interventions designed to enhance learning.",
"title": ""
},
{
"docid": "541055772a5c2bed70649d2ca9a6c584",
"text": "This report discusses methods for forecasting hourly loads of a US utility as part of the load forecasting track of the Global Energy Forecasting Competition 2012 hosted on Kaggle. The methods described (gradient boosting machines and Gaussian processes) are generic machine learning / regression algorithms and few domain specific adjustments were made. Despite this, the algorithms were able to produce highly competitive predictions and hopefully they can inspire more refined techniques to compete with state-of-the-art load forecasting methodologies.",
"title": ""
},
{
"docid": "3e570e415690daf143ea30a8554b0ac8",
"text": "Innovative technology on intelligent processes for smart home applications that utilize Internet of Things (IoT) is mainly limited and dispersed. The available trends and gaps were investigated in this study to provide valued visions for technical environments and researchers. Thus, a survey was conducted to create a coherent taxonomy on the research landscape. An extensive search was conducted for articles on (a) smart homes, (b) IoT and (c) applications. Three databases, namely, IEEE Explore, ScienceDirect and Web of Science, were used in the article search. These databases comprised comprehensive literature that concentrate on IoT-based smart home applications. Subsequently, filtering process was achieved on the basis of intelligent processes. The final classification scheme outcome of the dataset contained 40 articles that were classified into four classes. The first class includes the knowledge engineering process that examines data representation to identify the means of accomplishing a task for IoT applications and their utilisation in smart homes. The second class includes papers on the detection process that uses artificial intelligence (AI) techniques to capture the possible changes in IoT-based smart home applications. The third class comprises the analytical process that refers to the use of AI techniques to understand the underlying problems in smart homes by inferring new knowledge and suggesting appropriate solutions for the problem. The fourth class comprises the control process that describes the process of measuring and instructing the performance of IoT-based smart home applications against the specifications with the involvement of intelligent techniques. The basic features of this evolving approach were then identified in the aspects of motivation of intelligent process utilisation for IoT-based smart home applications and open-issue restriction utilisation. The recommendations for the approval and utilisation of intelligent process for IoT-based smart home applications were also determined from the literature.",
"title": ""
},
{
"docid": "6cb7cded3c10f00228ac58ff3b82d45e",
"text": "This paper presents a hierarchical control framework for the obstacle avoidance of autonomous and semi-autonomous ground vehicles. The high-level planner is based on motion primitives created from a four-wheel nonlinear dynamic model. Parameterized clothoids and drifting maneuvers are used to improve vehicle agility. The low-level tracks the planned trajectory with a nonlinear Model Predictive Controller. The first part of the paper describes the proposed control architecture and methodology. The second part presents simulative and experimental results with an autonomous and semi-autonomous ground vehicle traveling at high speed on an icy surface.",
"title": ""
},
{
"docid": "cbc04fde0873e0aff630388ee63b53bd",
"text": "Recent works in speech recognition rely either on connectionist temporal classification (CTC) or sequence-to-sequence models for character-level recognition. CTC assumes conditional independence of individual characters, whereas attention-based models can provide nonsequential alignments. Therefore, we could use a CTC loss in combination with an attention-based model in order to force monotonic alignments and at the same time get rid of the conditional independence assumption. In this paper, we use the recently proposed hybrid CTC/attention architecture for audio-visual recognition of speech in-the-wild. To the best of our knowledge, this is the first time that such a hybrid architecture architecture is used for audio-visual recognition of speech. We use the LRS2 database and show that the proposed audio-visual model leads to an 1.3% absolute decrease in word error rate over the audio-only model and achieves the new state-of-the-art performance on LRS2 database (7% word error rate). We also observe that the audio-visual model significantly outperforms the audio-based model (up to 32.9% absolute improvement in word error rate) for several different types of noise as the signal-to-noise ratio decreases.",
"title": ""
},
{
"docid": "59a1088003576f2e75cdbedc24ae8bdf",
"text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientik interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility akrae that thm &tfnrmnt;nn n&wd hv mwnhXno them 4. nnvnl Lyww u-c “‘1 YLL”I&.L.sU”4L 6uy’“s. u, b..S..“Y.Ayj .a.-** Y ..u. -... During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-sided approach to iinding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into soecialties, thus permitting each individual to -r-~ focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 199Oc). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitlyrelated segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The signilicance of the “information explosion” thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward -that is, are “logically” related, or complementary -one would expect to gain usefui information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such --->!L---f -----l-----ry-?r-. ----.---,a ?1-_----_I rl-conamons or comptementdnty one woum dtso expect me two literatures to refer to each other. If, however, the two literatures were developed independently of one another, the logical l inkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are “noninteractive” that ir if thmv hnvm n~.rer fnr odAnm\\ kppn &ml = ulyc 1U) a. “W, na6L.V ..Y.“. ,“a vva&“..n] “W.. 
UluIu together, and if neither cites the other, then it is possible that scientists have not previously considered both iiteratures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarily and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987,199l). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than Data Mining: Integration Q Application 295 From: KDD-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved. private knowledge -with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an AC linkage, (by considering only transitive relationships) but rather call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common u derstanding of what is meant by \"knowledge\". If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn’t known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human i tellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientificallyuseful information implicit in the public record, but not previously made xplicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logicallyrelated noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database arch strategies that can facilitate the discovery of complementary st uctures in the published literature of science. The universe or searchspace under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). 
The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. Tae interaction generates information structtues that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question 296 Technology Spotlight or problem area of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary o C, from which the user can select one candidate (at a time) an input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley, et al., 1987; p304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literalxtres, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words). The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured titledisplay (plus journal citation), that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.",
"title": ""
},
{
"docid": "f5ac489e8e387321abd9d3839d7d8ba2",
"text": "Online social networks like Slashdot bring valuable information to millions of users - but their accuracy is based on the integrity of their user base. Unfortunately, there are many “trolls” on Slashdot who post misinformation and compromise system integrity. In this paper, we develop a general algorithm called TIA (short for Troll Identification Algorithm) to classify users of an online “signed” social network as malicious (e.g. trolls on Slashdot) or benign (i.e. normal honest users). Though applicable to many signed social networks, TIA has been tested on troll detection on Slashdot Zoo under a wide variety of parameter settings. Its running time is faster than many past algorithms and it is significantly more accurate than existing methods.",
"title": ""
},
{
"docid": "878bdefc419be3da8d9e18111d26a74f",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "c1ddefd126c6d338c4cd9238e9067435",
"text": "Tensor networks are efficient representations of high-dimensional tensors which have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing such networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize models for classifying images. For the MNIST data set we obtain less than 1% test set classification error. We discuss how the tensor network form imparts additional structure to the learned model and suggest a possible generative interpretation.",
"title": ""
},
{
"docid": "cbdbe103bcc85f76f9e6ac09eed8ea4c",
"text": "Using the evidence collection and analysis methodology for Android devices proposed by Martini, Do and Choo (2015), we examined and analyzed seven popular Android cloud-based apps. Firstly, we analyzed each app in order to see what information could be obtained from their private app storage and SD card directories. We collated the information and used it to aid our investigation of each app’s database files and AccountManager data. To complete our understanding of the forensic artefacts stored by apps we analyzed, we performed further analysis on the apps to determine if the user’s authentication credentials could be collected for each app based on the information gained in the initial analysis stages. The contributions of this research include a detailed description of artefacts, which are of general forensic interest, for each app analyzed.",
"title": ""
},
{
"docid": "9f7bb80631e6aa2b13d0045580af15d1",
"text": "This paper presents an extensive study of the software implementation on workstations of the NIST-recommended elliptic curves over prime fields. We present the results of our implementation in C and assembler on a Pentium II 400 MHz workstation. We also provide a comparison with the NIST-recommended curves over binary fields.",
"title": ""
},
{
"docid": "d37d6139ced4c85ff0cbc4cce018212b",
"text": "We describe isone, a tool that facilitates the visual exploration of social networks. Social network analysis is a methodological approach in the social sciences using graph-theoretic concepts to describe, understand and explain social structure. The isone software is an attempt to integrate analysis and visualization of social networks and is intended to be used in research and teaching. While we are primarily focussing on users in the social sciences, several features provided in the tool will be useful in other fields as well. In contrast to more conventional mathematical software in the social sciences that aim at providing a comprehensive suite of analytical options, our emphasis is on complementing every option we provide with tailored means of graphical interaction. We attempt to make complicated types of analysis and data handling transparent, intuitive, and more readily accessible. User feedback indicates that many who usually regard data exploration and analysis complicated and unnerving enjoy the playful nature of visual interaction. Consequently, much of the tool is about graph drawing methods specifically adapted to facilitate visual data exploration. The origins of isone lie in an interdisciplinary cooperation with researchers from political science which resulted in innovative uses of graph drawing methods for social network visualization, and prototypical implementations thereof. With the growing demand for access to these methods, we started implementing an integrated tool for public use. It should be stressed, however, that isone remains a research platform and testbed for innovative methods, and is not intended to become",
"title": ""
},
{
"docid": "afe2bc204458117fb278ef500b485ea1",
"text": "PURPOSE\nTitanium based implant systems, though considered as the gold standard for rehabilitation of edentulous spaces, have been criticized for many inherent flaws. The onset of hypersensitivity reactions, biocompatibility issues, and an unaesthetic gray hue have raised demands for more aesthetic and tissue compatible material for implant fabrication. Zirconia is emerging as a promising alternative to conventional Titanium based implant systems for oral rehabilitation with superior biological, aesthetics, mechanical and optical properties. This review aims to critically analyze and review the credibility of Zirconia implants as an alternative to Titanium for prosthetic rehabilitation.\n\n\nSTUDY SELECTION\nThe literature search for articles written in the English language in PubMed and Cochrane Library database from 1990 till December 2016. The following search terms were utilized for data search: \"zirconia implants\" NOT \"abutment\", \"zirconia implants\" AND \"titanium implants\" AND \"osseointegration\", \"zirconia implants\" AND compatibility.\n\n\nRESULTS\nThe number of potential relevant articles selected were 47. All the human in vivo clinical, in vitro, animals' studies were included and discussed under the following subheadings: Chemical composition, structure and phases; Physical and mechanical properties; Aesthetic and optical properties; Osseointegration and biocompatibility; Surface modifications; Peri-implant tissue compatibility, inflammation and soft tissue healing, and long-term prognosis.\n\n\nCONCLUSIONS\nZirconia implants are a promising alternative to titanium with a superior soft-tissue response, biocompatibility, and aesthetics with comparable osseointegration. However, further long-term longitudinal and comparative clinical trials are required to validate zirconia as a viable alternative to the titanium implant.",
"title": ""
},
{
"docid": "dfc0f23dbb0a0556f53f5a913b936c8f",
"text": "Neural network-based methods represent the state-of-the-art in question generation from text. Existing work focuses on generating only questions from text without concerning itself with answer generation. Moreover, our analysis shows that handling rare words and generating the most appropriate question given a candidate answer are still challenges facing existing approaches. We present a novel two-stage process to generate question-answer pairs from the text. For the first stage, we present alternatives for encoding the span of the pivotal answer in the sentence using Pointer Networks. In our second stage, we employ sequence to sequence models for question generation, enhanced with rich linguistic features. Finally, global attention and answer encoding are used for generating the question most relevant to the answer. We motivate and linguistically analyze the role of each component in our framework and consider compositions of these. This analysis is supported by extensive experimental evaluations. Using standard evaluation metrics as well as human evaluations, our experimental results validate the significant improvement in the quality of questions generated by our framework over the state-of-the-art. The technique presented here represents another step towards more automated reading comprehension assessment. We also present a live system to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "e0ef97db18a47ba02756ba97830a0d0c",
"text": "This article reviews the literature concerning the introduction of interactive whiteboards (IWBs) in educational settings. It identifies common themes to emerge from a burgeoning and diverse literature, which includes reports and summaries available on the Internet. Although the literature reviewed is overwhelmingly positive about the impact and the potential of IWBs, it is primarily based on the views of teachers and pupils. There is insufficient evidence to identify the actual impact of such technologies upon learning either in terms of classroom interaction or upon attainment and achievement. This article examines this issue in light of varying conceptions of interactivity and research into the effects of learning with verbal and visual information.",
"title": ""
},
{
"docid": "a12538c128f7cd49f2561170f6aaf0ac",
"text": "We also define qp(kp) = 0, k ∈ Z. Fermat quotients appear and play a major role in various questions of computational and algebraic number theory and thus their distribution modulo p has been studied in a number of works; see, for example, [1, 5, 6, 7, 8, 9, 10, 11, 13, 15, 16, 17, 18] and the references therein. In particular, the image set Ip(U) = {qp(u) : 1 ≤ u ≤ U} has been investigated in some of these works. Let Ip(U) = #Ip(U) be the cardinality of Ip(U). It is well known (see, for example, [6, Section 2]) that",
"title": ""
},
{
"docid": "3bbf4bd1daaf0f6f916268907410b88f",
"text": "UNLABELLED\nNoncarious cervical lesions are highly prevalent and may have different etiologies. Regardless of their origin, be it acid erosion, abrasion, or abfraction, restoring these lesions can pose clinical challenges, including access to the lesion, field control, material placement and handling, marginal finishing, patient discomfort, and chair time. This paper describes a novel technique for minimizing these challenges and optimizing the restoration of noncarious cervical lesions using a technique the author describes as the class V direct-indirect restoration. With this technique, clinicians can create precise extraoral margin finishing and polishing, while maintaining periodontal health and controlling polymerization shrinkage stress.\n\n\nCLINICAL SIGNIFICANCE\nThe clinical technique described in this article has the potential for being used routinely in treating noncarious cervical lesions, especially in cases without easy access and limited field control. Precise margin finishing and polishing is one of the greatest benefits of the class V direct-indirect approach, as the author has seen it work successfully in his practice over the past five years.",
"title": ""
}
] |
scidocsrr
|
5393c145478fcbcd27ed2558db779cf2
|
Low-profile fully integrated 60 GHz 18 element phased array on multilayer liquid crystal polymer flip chip package
|
[
{
"docid": "c01e3b06294f9e84bcc9d493990c6149",
"text": "An integrated CMOS 60 GHz phased-array antenna module supporting symmetrical 32 TX/RX elements for wireless docking is described. Bidirectional architecture with shared blocks, mm-wave TR switch design with less than 1dB TX loss, and a full built in self test (BIST) circuits with 5deg and +/-1dB measurement accuracy of phase and power are presented. The RFIC size is 29mm2, consuming 1.2W/0.85W at TX and RX with a 29dBm EIRP at -19dB EVM and 10dB NF.",
"title": ""
}
] |
[
{
"docid": "9d7a441731e9d0c62dd452ccb3d19f7b",
"text": " In many countries, especially in under developed and developing countries proper health care service is a major concern. The health centers are far and even the medical personnel are deficient when compared to the requirement of the people. For this reason, health services for people who are unhealthy and need health monitoring on regular basis is like impossible. This makes the health monitoring of healthy people left far more behind. In order for citizens not to be deprived of the primary care it is always desirable to implement some system to solve this issue. The application of Internet of Things (IoT) is wide and has been implemented in various areas like security, intelligent transport system, smart cities, smart factories and health. This paper focuses on the application of IoT in health care system and proposes a novel architecture of making use of an IoT concept under fog computing. The proposed architecture can be used to acknowledge the underlying problem of deficient clinic-centric health system and change it to smart patientcentric health system.",
"title": ""
},
{
"docid": "ddf18acc13f1f04e6b992665fd0104df",
"text": "ABSTRACT The problem of ensuring e cient storage and fast retrieval for multi-version structured documents is important because of the recent popularity of XML documents and semistructured information on the web. Traditional document version control systems, e.g. RCS, which model documents as a sequence of lines of text and use the shortest edit script to represent version di erences are inadequate since they do not preserve the logical structure of the original document. Instead we propose a new approach where the structure of the document is preserved intact, and their sub-objects are timestamped hierarchically for e cient reconstruction of current and past versions. Our technique, called the Usefulness Based Copy Control (UBCC), is geared towards efcient version reconstruction while using small space overhead. Our analysis and experiments illustrate the e ectiveness of the overall approach to version control for structured documents.",
"title": ""
},
{
"docid": "ad4d38ee8089a67353586abad319038f",
"text": "State-of-the-art systems of Chinese Named Entity Recognition (CNER) require large amounts of hand-crafted features and domainspecific knowledge to achieve high performance. In this paper, we apply a bidirectional LSTM-CRF neural network that utilizes both characterlevel and radical-level representations. We are the first to use characterbased BLSTM-CRF neural architecture for CNER. By contrasting the results of different variants of LSTM blocks, we find the most suitable LSTM block for CNER. We are also the first to investigate Chinese radical-level representations in BLSTM-CRF architecture and get better performance without carefully designed features. We evaluate our system on the third SIGHAN Bakeoff MSRA data set for simplfied CNER task and achieve state-of-the-art performance 90.95% F1.",
"title": ""
},
{
"docid": "1b53378e33f24f59eb0486f2978bebee",
"text": "The advances in location-acquisition and mobile computing techniques have generated massive spatial trajectory data, which represent the mobility of a diversity of moving objects, such as people, vehicles, and animals. Many techniques have been proposed for processing, managing, and mining trajectory data in the past decade, fostering a broad range of applications. In this article, we conduct a systematic survey on the major research into trajectory data mining, providing a panorama of the field as well as the scope of its research topics. Following a road map from the derivation of trajectory data, to trajectory data preprocessing, to trajectory data management, and to a variety of mining tasks (such as trajectory pattern mining, outlier detection, and trajectory classification), the survey explores the connections, correlations, and differences among these existing techniques. This survey also introduces the methods that transform trajectories into other data formats, such as graphs, matrices, and tensors, to which more data mining and machine learning techniques can be applied. Finally, some public trajectory datasets are presented. This survey can help shape the field of trajectory data mining, providing a quick understanding of this field to the community.",
"title": ""
},
{
"docid": "b0a1a782ce2cbf5f152a52537a1db63d",
"text": "In piezoelectric energy harvesting (PEH), with the use of the nonlinear technique named synchronized switching harvesting on inductor (SSHI), the harvesting efficiency can be greatly enhanced. Furthermore, the introduction of its self-powered feature makes this technique more applicable for standalone systems. In this article, a modified circuitry and an improved analysis for self-powered SSHI are proposed. With the modified circuitry, direct peak detection and better isolation among different units within the circuit can be achieved, both of which result in further removal on dissipative components. In the improved analysis, details in open circuit voltage, switching phase lag, and voltage inversion factor are discussed, all of which lead to a better understanding to the working principle of the self-powered SSHI. Both analyses and experiments show that, in terms of harvesting power, the higher the excitation level, the closer between self-powered and ideal SSHI; at the same time, the more beneficial the adoption of self-powered SSHI treatment in piezoelectric energy harvesting, compared to the standard energy harvesting (SEH) technique.",
"title": ""
},
{
"docid": "3b638c51c7884b63f541ccd44fbaa86d",
"text": "We tried to develop a real time face detection and recognition system which uses an “appearance-based” approach. For detection purpose we used Viola Jones algorithm. To recognize face we worked with Eigen Faces which is a PCA based algorithm. In a real time to recognize a face we need a data training set. For data training set we took five images of each person and manipulated the Eigen values to match the known individual.",
"title": ""
},
{
"docid": "57c81eb0f559ea1c10747b5ecae14c67",
"text": "OBJECTIVE\nAutism spectrum disorder (ASD) is associated with amplified emotional responses and poor emotional control, but little is known about the underlying mechanisms. This article provides a conceptual and methodologic framework for understanding compromised emotion regulation (ER) in ASD.\n\n\nMETHOD\nAfter defining ER and related constructs, methods to study ER were reviewed with special consideration on how to apply these approaches to ASD. Against the backdrop of cognitive characteristics in ASD and existing ER theories, available research was examined to identify likely contributors to emotional dysregulation in ASD.\n\n\nRESULTS\nLittle is currently known about ER in youth with ASD. Some mechanisms that contribute to poor ER in ASD may be shared with other clinical populations (e.g., physiologic arousal, degree of negative and positive affect, alterations in the amygdala and prefrontal cortex), whereas other mechanisms may be more unique to ASD (e.g., differences in information processing/perception, cognitive factors [e.g., rigidity], less goal-directed behavior and more disorganized emotion in ASD).\n\n\nCONCLUSIONS\nAlthough assignment of concomitant psychiatric diagnoses is warranted in some cases, poor ER may be inherent in ASD and may provide a more parsimonious conceptualization for the many associated socioemotional and behavioral problems in this population. Further study of ER in youth with ASD may identify meaningful subgroups of patients and lead to more effective individualized treatments.",
"title": ""
},
{
"docid": "a6b02e927784468398442d7c4586aa92",
"text": "for concrete cause-effect",
"title": ""
},
{
"docid": "bb5ce42707f086d4ca2c6a5d23587070",
"text": "Supervoxel methods such as Simple Linear Iterative Clustering (SLIC) are an effective technique for partitioning an image or volume into locally similar regions, and are a common building block for the development of detection, segmentation and analysis methods. We introduce maskSLIC an extension of SLIC to create supervoxels within regions-of-interest, and demonstrate, on examples from 2-dimensions to 4-dimensions, that maskSLIC overcomes issues that affect SLIC within an irregular mask. We highlight the benefits of this method through examples, and show that it is able to better represent underlying tumour subregions and achieves significantly better results than SLIC on the BRATS 2013 brain tumour challenge data (p=0.001) – outperforming SLIC on 18/20 scans. Finally, we show an application of this method for the analysis of functional tumour subregions and demonstrate that it is more effective than voxel clustering.",
"title": ""
},
{
"docid": "8199c3589d5697ce2f3e19188e49c1a0",
"text": "The aim of this study was to compare 2 models of resistance training (RT) programs, nonperiodized (NP) training and daily nonlinear periodized (DNLP) training, on strength, power, and flexibility in untrained adolescents. Thirty-eight untrained male adolescents were randomly assigned to 1 of 3 groups: a control group, NP RT program, and DNLP program. The subjects were tested pretraining and after 4, 8, and 12 weeks for 1 repetition maximum (1RM) resistances in the bench press and 45° leg press, sit and reach test, countermovement vertical jump (CMVJ), and standing long jump (SLJ). Both training groups performed the same sequence of exercises 3 times a week for a total of 36 sessions. The NP RT consisted of 3 sets of 10-12RM throughout the training period. The DNLP training consisted of 3 sets using different training intensities for each of the 3 training sessions per week. The total volume of the training programs was not significantly different. Both the NP and DNLP groups exhibited a significant increase in the 1RM for the bench press and 45° leg press posttraining compared with that pretraining, but there were no significant differences between groups (p ≤ 0.05). The DNLP group's 1RM changes showed greater percentage improvements and effect sizes. Training intensity for the bench press and 45° leg press did not significantly change during the training. In the CMVJ and SLJ tests, NP and DNLP training showed no significant change. The DNLP group showed a significant increase in the sit and reach test after 8 and 12 weeks of training compared with pretraining; this did not occur with NP training. In summary, in untrained adolescents during a 12-week training period, a DNLP program can be used to elicit similar and possible superior maximal strength and flexibility gains compared with an NP multiset training model.",
"title": ""
},
{
"docid": "9ff522e9874c924636f9daba90f9881a",
"text": "Time management is required in simulations to ensure temporal aspects of the system under investigation are correctly reproduced by the simulation model. This paper describes the time management services that have been defined in the High Level Architecture. The need for time management services is discussed, as well as design rationales that lead to the current definition of the HLA time management services. These services are described, highlighting information that must flow between federates and the Runtime Infrastructure (RTI) software in order to efficiently implement time management algorithms.",
"title": ""
},
{
"docid": "84307c2dd94ebe89c46a535b31b4b51b",
"text": "Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach for addressing this challenge is the options framework [41]. However, only recently in [1] was a policy gradient theorem derived for online learning of general purpose options in an end to end fashion. In this work, we extend previous work on this topic that only focuses on learning a two-level hierarchy including options and primitive actions to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options where high level temporally extended options are composed of lower level options with finer resolutions in time. We extend results from [1] and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.",
"title": ""
},
{
"docid": "d62b3a328257253bcb41bf0fbdeb9242",
"text": "Logging has been a common practice for monitoring and diagnosing performance issues. However, logging comes at a cost, especially for large-scale online service systems. First, the overhead incurred by intensive logging is non-negligible. Second, it is costly to diagnose a performance issue if there are a tremendous amount of redundant logs. Therefore, we believe that it is important to limit the overhead incurred by logging, without sacrificing the logging effectiveness. In this paper we propose Log2, a cost-aware logging mechanism. Given a “budget” (defined as the maximum volume of logs allowed to be output in a time interval), Log2 makes the “whether to log” decision through a two-phase filtering mechanism. In the first phase, a large number of irrelevant logs are discarded efficiently. In the second phase, useful logs are cached and output while complying with logging budget. In this way, Log2 keeps the useful logs and discards the less useful ones. We have implemented Log2 and evaluated it on an open source system as well as a real-world online service system from Microsoft. The experimental results show that Log2 can control logging overhead while preserving logging effectiveness.",
"title": ""
},
{
"docid": "67a8a8ef9111edd9c1fa88e7c59b6063",
"text": "The process of obtaining intravenous (IV) access, Venipuncture, is an everyday invasive procedure in medical settings and there are more than one billion venipuncture related procedures like blood draws, peripheral catheter insertions, intravenous therapies, etc. performed per year [3]. Excessive venipunctures are both time and resource consuming events causing anxiety, pain and distress in patients, or can lead to severe harmful injuries [8]. The major problem faced by the doctors today is difficulty in accessing veins for intra-venous drug delivery & other medical situations [3]. There is a need to develop vein detection devices which can clearly show veins. This project deals with the design development of non-invasive subcutaneous vein detection system and is implemented based on near infrared imaging and interfaced to a laptop to make it portable. A customized CCD camera is used for capturing the vein images and Computer Software modules (MATLAB & LabVIEW) is used for the processing [3].",
"title": ""
},
{
"docid": "2e5d9f5ae2f631357d8a6ef22cd52b62",
"text": "Necrobiosis lipoidica is a rare disorder that usually appears in the lower extremities and it is often related to diabetes mellitus. There are few reported cases of necrobiosis lipoidica in children. We present an interesting case in that the patient developed lesions on the abdomen, which is an unusual location.",
"title": ""
},
{
"docid": "f17bd8e3c7e26fa64fb6d650f2ccb9d6",
"text": "Human-robot collaborative work has the potential to advance quality, efficiency and safety in manufacturing. In this paper we present a gestural communication lexicon for human-robot collaboration in industrial assembly tasks and establish methodology for producing such a lexicon. Our user experiments are grounded in a study of industry needs, providing potential real-world applicability to our results. Actions required for industrial assembly tasks are abstracted into three classes: part acquisition, part manipulation, and part operations. We analyzed the communication between human pairs performing these subtasks and derived a set of communication terms and gestures. We found that participant-provided gestures are intuitive and well suited to robotic implementation, but that interpretation is highly dependent on task context. We then implemented these gestures on a robot arm in a human-robot interaction context, and found the gestures to be easily interpreted by observers. We found that observation of human-human interaction can be effective in determining what should be communicated in a given human-robot task, how communication gestures should be executed, and priorities for robotic system implementation based on frequency of use.",
"title": ""
},
{
"docid": "cd338aee8e141212a1548431766df498",
"text": "In recent years, contextual models that exploit maps have been shown to be very effective for many recognition and localization tasks. In this paper we propose to exploit aerial images in order to enhance freely available world maps. Towards this goal, we make use of OpenStreetMap and formulate the problem as the one of inference in a Markov random field parameterized in terms of the location of the road-segment centerlines as well as their width. This parameterization enables very efficient inference and returns only topologically correct roads. In particular, we can segment all OSM roads in the whole world in a single day using a small cluster of 10 computers. Importantly, our approach generalizes very well, it can be trained using only 1.5 km2 aerial imagery and produce very accurate results in any location across the globe. We demonstrate the effectiveness of our approach outperforming the state-of-the-art in two new benchmarks that we collect. We then show how our enhanced maps are beneficial for semantic segmentation of ground images.",
"title": ""
},
{
"docid": "6b2e9b890029ced15f4f9a00a7a7be81",
"text": "Android belongs to the leading operating systems for mobile devices, e.g. smartphones or tablets. The availability of Android's source code under general public license allows interesting developments and useful modifications of the platform for third parties, like the integration of real-time support. This paper presents an extension of Android improving its real-time capabilities, without loss of original Android functionality and compatibility to existing applications. In our approach we apply the RT_PREEMPT patch to the Linux kernel, modify essential Android components like the Dalvik virtual machine and introduce a new real-time interface for Android developers. The resulting Android system supports applications with real-time requirements, which can be implemented in the same way as non-real-time applications.",
"title": ""
},
{
"docid": "19c3b504c0afc170720e9f0a9180a23b",
"text": "The global telephone network is relied upon by billions every day. Central to its operation is the Signaling System 7 (SS7) protocol, which is used for setting up calls, managing mobility, and facilitating many other network services. This protocol was originally built on the assumption that only a small number of trusted parties would be able to directly communicate with its core infrastructure. As a result, SS7 — as a feature — allows all parties with core access to redirect and intercept calls for any subscriber anywhere in the world. Unfortunately, increased interconnectivity with the SS7 network has led to a growing number of illicit call redirection attacks. We address such attacks with Sonar, a system that detects the presence of SS7 redirection attacks by securely measuring call audio round-trip times between telephony devices. This approach works because redirection attacks force calls to travel longer physical distances than usual, thereby creating longer end-to-end delay. We design and implement a distance bounding-inspired protocol that allows us to securely characterize the round-trip time between the two endpoints. We then use custom hardware deployed in 10 locations across the United States and a redirection testbed to characterize how distance affects round trip time in phone networks. We develop a model using this testbed and show Sonar is able to detect 70.9% of redirected calls between call endpoints of varying attacker proximity (300–7100 miles) with low false positive rates (0.3%). Finally, we ethically perform actual SS7 redirection attacks on our own devices with the help of an industry partner to demonstrate that Sonar detects 100% of such redirections in a real network (with no false positives). As such, we demonstrate that telephone users can reliably detect SS7 redirection attacks and protect the integrity of their calls.",
"title": ""
}
] |
scidocsrr
|
a5c96ee7a17e998288e8735bc7bcc63f
|
Human-Intent Detection and Physically Interactive Control of a Robot Without Force Sensors
|
[
{
"docid": "56316a77e260d8122c4812d684f4d223",
"text": "Manipulation fundamentally requires a manipulator to be mechanically coupled to the object being manipulated. A consideration of the physical constraints imposed by dynamic interaction shows that control of a vector quantity such as position or force is inadequate and that control of the manipulator impedance is also necessary. Techniques for control of manipulator behaviour are presented which result in a unified approach to kinematically constrained motion, dynamic interaction, target acquisition and obstacle avoidance.",
"title": ""
},
{
"docid": "9b1d9cc24177c040d165bdf1fee1459e",
"text": "This paper addresses the field of humanoid and personal robotics—its objectives, motivations, and technical problems. The approach described in the paper is based on the analysis of humanoid and personal robots as an evolution from industrial to advanced and service robotics driven by the need for helpful machines, as well as a synthesis of the dream of replicating humans. The first part of the paper describes the development of anthropomorphic components for humanoid robots, with particular regard to anthropomorphic sensors for vision and touch, an eight-d.o.f. arm, a three-fingered hand with sensorized fingertips, and control schemes for grasping. Then, the authors propose a user-oriented designmethodology for personal robots, anddescribe their experience in the design, development, and validation of a real personal robot composed of a mobile unit integrating some of the anthropomorphic components introduced previously and aimed at operating in a distributedworking environment. Based on the analysis of experimental results, the authors conclude that humanoid robotics is a tremendous and attractive technical and scientific challenge for robotics research. The real utility of humanoids has still to be demonstrated, but personal assistance can be envisaged as a promising application domain. Personal robotics also poses difficult technical problems, especially related to the need for achieving adequate safety, proper human–robot interaction, useful performance, and affordable cost. When these problems are solved, personal robots will have an excellent chance for significant application opportunities, especially if integrated into future home automation systems, and if supported by the availability of humanoid robots. © 2001 John Wiley & Sons, Inc.",
"title": ""
}
] |
[
{
"docid": "4c48aa985223ae9317c5f73361b5e7a3",
"text": "Low-dropout voltage regulators (LDOs) have been extensively used on-chip to supply voltage for various circuit blocks. Digital LDOs (DLDO) have recently attracted circuit designers for their low voltage operating capability and load current scalability. Existing DLDO techniques suffer from either poor transient performance due to slow digital control loop or poor DC load regulation due to low loop gain. A dual-loop architecture to improve the DC load regulation and transient performance is proposed in this work. The proposed regulator uses a fast control loop for improved transient response and an analog assisted dynamic reference correction loop for an improved DC load regulation. The design achieved a DC load regulation of 0.005mV/mA and a settling time of 139ns while regulating loads up to 200mA. The proposed DLDO is designed in 28nm FD-SOI technology with a 0.027mm2 active area.",
"title": ""
},
{
"docid": "b252aea38a537a22ab34fdf44e9443d2",
"text": "The objective of this study is to describe the case of a patient presenting advanced epidermoid carcinoma of the penis associated to myiasis. A 41-year-old patient presenting with a necrotic lesion of the distal third of the penis infested with myiasis was attended in the emergency room of our hospital and was submitted to an urgent penectomy. This is the first case of penile cancer associated to myiasis described in the literature. This case reinforces the need for educative campaigns to reduce the incidence of this disease in developing countries.",
"title": ""
},
{
"docid": "8c3ecd27a695fef2d009bbf627820a0d",
"text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.",
"title": ""
},
{
"docid": "d166f4cd01d22d7143487b691138023c",
"text": "Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin’s blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous.",
"title": ""
},
{
"docid": "2bd8a66a3e3cfafc9b13fd7ec47e86fc",
"text": "Psidium guajava Linn. (Guava) is used not only as food but also as folk medicine in subtropical areas around the world because of its pharmacologic activities. In particular, the leaf extract of guava has traditionally been used for the treatment of diabetes in East Asia and other countries. Many pharmacological studies have demonstrated the ability of this plant to exhibit antioxidant, hepatoprotective, anti-allergy, antimicrobial, antigenotoxic, antiplasmodial, cytotoxic, antispasmodic, cardioactive, anticough, antidiabetic, antiinflamatory and antinociceptive activities, supporting its traditional uses. Suggesting a wide range of clinical applications for the treatment of infantile rotaviral enteritis, diarrhoea and diabetes.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "6154efdd165c7323c1ba9ec48e63cfc6",
"text": "A RANSAC based procedure is described for detecting inliers corresponding to multiple models in a given set of data points. The algorithm we present in this paper (called multiRANSAC) on average performs better than traditional approaches based on the sequential application of a standard RANSAC algorithm followed by the removal of the detected set of inliers. We illustrate the effectiveness of our approach on a synthetic example and apply it to the problem of identifying multiple world planes in pairs of images containing dominant planar structures.",
"title": ""
},
{
"docid": "b27224825bb28b9b8d0eea37f8900d42",
"text": "The use of Convolutional Neural Networks (CNN) in natural im age classification systems has produced very impressive results. Combined wit h the inherent nature of medical images that make them ideal for deep-learning, fu rther application of such systems to medical image classification holds much prom ise. However, the usefulness and potential impact of such a system can be compl etely negated if it does not reach a target accuracy. In this paper, we present a s tudy on determining the optimum size of the training data set necessary to achiev e igh classification accuracy with low variance in medical image classification s ystems. The CNN was applied to classify axial Computed Tomography (CT) imag es into six anatomical classes. We trained the CNN using six different sizes of training data set ( 5, 10, 20, 50, 100, and200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts G eneral Hospital (MGH) Picture Archiving and Communication System (PACS). U sing this data, we employ the learning curve approach to predict classificat ion ccuracy at a given training sample size. Our research will present a general me thodology for determining the training data set size necessary to achieve a cert in target classification accuracy that can be easily applied to other problems within such systems.",
"title": ""
},
{
"docid": "061e10ca5d2b4e807878e5eec0827b28",
"text": "Uplift modeling is a machine learning technique that aims to model treatment effects heterogeneity. It has been used in business and health sectors to predict the effect of a specific action on a given individual. Despite its advantages, uplift models show high sensitivity to noise and disturbance, which leads to unreliable results. In this paper we show different approaches to address the problem of uplift modeling, we demonstrate how disturbance in data can affect uplift measurement. We propose a new approach, we call it Pessimistic Uplift Modeling, that minimizes disturbance effects. We compared our approach with the existing uplift methods, on simulated and real datasets. The experiments show that our approach outperforms the existing approaches, especially in the case of high noise data environment.",
"title": ""
},
{
"docid": "f292b8666eb78e4d881777fee35123f7",
"text": "Abstract. We propose an approach to address data uncertainty for discrete optimization and network flow problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. In particular, when both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows controlling the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0 − 1 discrete optimization problem on n variables, then we solve the robust counterpart by solving at most n + 1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0 − 1 discrete optimization problem remains polynomially solvable. In particular, robust matching, spanning tree, shortest path, matroid intersection, etc. are polynomially solvable. We also show that the robust counterpart of an NP -hard α-approximable 0 − 1 discrete optimization problem, remains α-approximable. Finally, we propose an algorithm for robust network flows that solves the robust counterpart by solving a polynomial number of nominal minimum cost flow problems in a modified network.",
"title": ""
},
{
"docid": "dd52742343462b3106c18274c143928b",
"text": "This paper presents a descriptive account of the social practices surrounding the iTunes music sharing of 13 participants in one organizational setting. Specifically, we characterize adoption, critical mass, and privacy; impression management and access control; the musical impressions of others that are created as a result of music sharing; the ways in which participants attempted to make sense of the dynamic system; and implications of the overlaid technical, musical, and corporate topologies. We interleave design implications throughout our results and relate those results to broader themes in a music sharing design space.",
"title": ""
},
{
"docid": "72d51fd4b384f4a9c3f6fe70606ab120",
"text": "Cloud Computing is a flexible, cost-effective, and proven delivery platform for providing business or consumer IT services over the Internet. However, cloud Computing presents an added level of risk because essential services are often outsourced to a third party, which makes it harder to maintain data security and privacy, support data and service availability, and demonstrate compliance. Cloud Computing leverages many technologies (SOA, virtualization, Web 2.0); it also inherits their security issues, which we discuss here, identifying the main vulnerabilities in this kind of systems and the most important threats found in the literature related to Cloud Computing and its environment as well as to identify and relate vulnerabilities and threats with possible solutions.",
"title": ""
},
{
"docid": "471579f955f8b68a357c8780a7775cc9",
"text": "In addition to practitioners who care for male patients, with the increased use of high-resolution anoscopy, practitioners who care for women are seeing more men in their practices as well. Some diseases affecting the penis can impact on their sexual partners. Many of the lesions and neoplasms of the penis occur on the vulva as well. In addition, there are common and rare lesions unique to the penis. A review of the scope of penile lesions and neoplasms that may present in a primary care setting is presented to assist in developing a differential diagnosis if such a patient is encountered, as well as for practitioners who care for their sexual partners. A familiarity will assist with recognition, as well as when consultation is needed.",
"title": ""
},
{
"docid": "47e11b1d734b1dcacc182e55d378f2a2",
"text": "Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping stabilize the neural networks. It has become a new norm in deep RL algorithms. In this paper, however, we showcase that varying the size of the experience replay buffer can hurt the performance even in very simple tasks. The size of the replay buffer is actually a hyper-parameter which needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform primitive DQN in some tasks.",
"title": ""
},
{
"docid": "efc341c0a3deb6604708b6db361bfba5",
"text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithm and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of DBSCAN algorithm which are neighborhood radius Eps and minimum number of points MinPts. The values of these parameters significantly affect clustering performance of the algorithm. In this study, we propose AE-DBSCAN algorithm which includes a new method to determine the value of neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.",
"title": ""
},
{
"docid": "7aa5bf782622f2f0247dce09dcb23077",
"text": "In the wake of the digital revolution we will see a dramatic transformation of our economy and societal institutions. While the benefits of this transformation can be massive, there are also tremendous risks. The fundaments of autonomous decision-making, human dignity, and democracy are shaking. After the automation of production processes and vehicle operation, the automation of society is next. This is moving us to a crossroads: we must decide between a society in which the actions are determined in a top-down way and then implemented by coercion or manipulative technologies or a society, in which decisions are taken in a free and participatory way. Modern information and communication systems enable both, but the latter has economic and strategic benefits.",
"title": ""
},
{
"docid": "170e7a72a160951e880f18295d100430",
"text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.",
"title": ""
},
{
"docid": "f96db30bac65af7c9315fae0f9bb7b7e",
"text": "Combining MIMO with OFDM, it is possible to significantly reduce receiver complexity as OFDM greatly simplifies equalization at the receiver. MIMO-OFDM is currently being considered for a number of developing wireless standards; consequently, the study of MIMO-OFDM in realistic environments is of great importance. This paper describes an approach for prototyping a MIMO-OFDM systems using a flexible software defined radio (SDR) system architecture in conjunction with commercially available hardware. An emphasis on software permits a focus on algorithm and system design issues rather than implementation and hardware configuration. The penalty of this flexibility, however, is that the ease of use comes at the expense of overall throughput. To illustrate the benefits of the proposed architecture, applications to MIMO-OFDM system prototyping and preliminary MIMO channel measurements are presented. A detailed description of the hardware is provided along with downloadable software to reproduce the system.",
"title": ""
},
{
"docid": "9e7ec69d26ead38692ee0059980538c8",
"text": "A dynamic control system design has been a great demand in the control engineering community, with many applications particularly in the field of flight control. This paper presents investigations into the development of a dynamic nonlinear inverse-model based control of a twin rotor multi-input multi-output system (TRMS). The TRMS is an aerodynamic test rig representing the control challenges of modern air vehicle. A model inversion control with the developed adaptive model is applied to the system. An adaptive neuro-fuzzy inference system (ANFIS) is augmented with the control system to improve the control response. To demonstrate the applicability of the methods, a simulated hovering motion of the TRMS, derived from experimental data is considered in order to evaluate the tracking properties and robustness capacities of the inverse- model control technique.",
"title": ""
},
{
"docid": "df67da08931ed6d0d100ff857c2b1ced",
"text": "Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.",
"title": ""
}
] |
scidocsrr
|
a933e9c1ab6140aac102052413b934b7
|
Increased nature relatedness and decreased authoritarian political views after psilocybin for treatment-resistant depression
|
[
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
}
] |
[
{
"docid": "932934a4362bd671427954d0afb61459",
"text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.",
"title": ""
},
{
"docid": "2833dbe3c3e576a3ba8f175a755b6964",
"text": "The accuracy and granularity of network flow measurement play a critical role in many network management tasks, especially for anomaly detection. Despite its important, traffic monitoring often introduces overhead to the network, thus, operators have to employ sampling and aggregation to avoid overloading the infrastructure. However, such sampled and aggregated information may affect the accuracy of traffic anomaly detection. In this work, we propose a novel method that performs adaptive zooming in the aggregation of flows to be measured. In order to better balance the monitoring overhead and the anomaly detection accuracy, we propose a prediction based algorithm that dynamically change the granularity of measurement along both the spatial and the temporal dimensions. To control the load on each individual switch, we carefully delegate monitoring rules in the network wide. Using real-world data and three simple anomaly detectors, we show that the adaptive based counting can detect anomalies more accurately with less overhead.",
"title": ""
},
{
"docid": "631b473342cc30360626eaea0734f1d8",
"text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.",
"title": ""
},
{
"docid": "88a27616b16a0d643939a40685be12f1",
"text": "The water supply system has a high operational cost associated with its operations. This is characteristically due to the operations of the pumps that consume significantly high amount of electric energy. In order to minimize the electric energy consumption and reduce the maintenance cost of the system, this paper proposes the use of an Adaptive Weighted sum Genetic Algorithm (AWGA) in creating an optimal pump schedule, which can minimize the cost of electricity and satisfy the constraint of the maximum and minimum levels of water in the reservoir as well. The Adaptive weighted sum GA is based on popular weighted sum approach GA for multi-objective optimization problem wherein the weights multipliers of the individual fitness functions are adaptively selected. The algorithm has been tested using a hypothetical case study and promising results have been obtained and presented.",
"title": ""
},
{
"docid": "10ebda480df1157da5581b6219a9464a",
"text": "Our goal is to create a convenient natural language interface for performing wellspecified but complex actions such as analyzing data, manipulating text, and querying databases. However, existing natural language interfaces for such tasks are quite primitive compared to the power one wields with a programming language. To bridge this gap, we start with a core programming language and allow users to “naturalize” the core language incrementally by defining alternative, more natural syntax and increasingly complex concepts in terms of compositions of simpler ones. In a voxel world, we show that a community of users can simultaneously teach a common system a diverse language and use it to build hundreds of complex voxel structures. Over the course of three days, these users went from using only the core language to using the naturalized language in 85.9% of the last 10K utterances.",
"title": ""
},
{
"docid": "fe16f2d946b3ea7bc1169d5667365dbe",
"text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.",
"title": ""
},
{
"docid": "ba9de90efb41ef69e64a6880e420e0ac",
"text": "The emergence of chronic inflammation during obesity in the absence of overt infection or well-defined autoimmune processes is a puzzling phenomenon. The Nod-like receptor (NLR) family of innate immune cell sensors, such as the nucleotide-binding domain, leucine-rich–containing family, pyrin domain–containing-3 (Nlrp3, but also known as Nalp3 or cryopyrin) inflammasome are implicated in recognizing certain nonmicrobial originated 'danger signals' leading to caspase-1 activation and subsequent interleukin-1β (IL-1β) and IL-18 secretion. We show that calorie restriction and exercise-mediated weight loss in obese individuals with type 2 diabetes is associated with a reduction in adipose tissue expression of Nlrp3 as well as with decreased inflammation and improved insulin sensitivity. We further found that the Nlrp3 inflammasome senses lipotoxicity-associated increases in intracellular ceramide to induce caspase-1 cleavage in macrophages and adipose tissue. Ablation of Nlrp3 in mice prevents obesity-induced inflammasome activation in fat depots and liver as well as enhances insulin signaling. Furthermore, elimination of Nlrp3 in obese mice reduces IL-18 and adipose tissue interferon-γ (IFN-γ) expression, increases naive T cell numbers and reduces effector T cell numbers in adipose tissue. Collectively, these data establish that the Nlrp3 inflammasome senses obesity-associated danger signals and contributes to obesity-induced inflammation and insulin resistance.",
"title": ""
},
{
"docid": "3c3e377d9e06499549e4de8e13e39612",
"text": "A plastic ankle foot orthosis (AFO) was developed, referred to as functional ankle foot orthosis Type 2 (FAFO (II)), which can deal with genu recurvatum and the severe spastic foot in walking. Clinical trials were successful for all varus and drop feet, and for most cases of genu recurvatum. Electromyogram studies showed that the FAFO (II) reduced the spasticity of gastrocnemius and hamstring muscles and activated the quadricep muscles. Gait analysis revealed a reduction of the knee angles in the stance phase on the affected side when using the FAFO (II). Mechanical stress tests showed excellent durability of the orthosis and demonstrated its effectiveness for controlling spasticity in comparison with other types of plastic AFOs.",
"title": ""
},
{
"docid": "1a23c0ed6aea7ba2cf4d3021de4cfa8b",
"text": "This article focuses on the traffic coordination problem at traffic intersections. We present a decentralized coordination approach, combining optimal control with model-based heuristics. We show how model-based heuristics can lead to low-complexity solutions that are suitable for a fast online implementation, and analyze its properties in terms of efficiency, feasibility and optimality. Finally, simulation results for different scenarios are also presented.",
"title": ""
},
{
"docid": "af78c57378a472c8f7be4eb354feb442",
"text": "Mutations in the human sonic hedgehog gene ( SHH) are the most frequent cause of autosomal dominant inherited holoprosencephaly (HPE), a complex brain malformation resulting from incomplete cleavage of the developing forebrain into two separate hemispheres and ventricles. Here we report the clinical and molecular findings in five unrelated patients with HPE and their relatives with an identified SHH mutation. Three new and one previously reported SHH mutations were identified, a fifth proband was found to carry a reciprocal subtelomeric rearrangement involving the SHH locus in 7q36. An extremely wide intrafamilial phenotypic variability was observed, ranging from the classical phenotype with alobar HPE accompanied by typical severe craniofacial abnormalities to very mild clinical signs of choanal stenosis or solitary median maxillary central incisor (SMMCI) only. Two families were initially ascertained because of microcephaly in combination with developmental delay and/or mental retardation and SMMCI, the latter being a frequent finding in patients with an identified SHH mutation. In other affected family members a delay in speech acquisition and learning disabilities were the leading clinical signs. Conclusion: mutational analysis of the sonic hedgehog gene should not only be considered in patients presenting with the classical holoprosencephaly phenotype but also in those with two or more clinical signs of the wide phenotypic spectrum of associated abnormalities, especially in combination with a positive family history.",
"title": ""
},
{
"docid": "a0ca6986d59905cea49ed28fa378c69e",
"text": "The epidemic of type 2 diabetes and impaired glucose tolerance is one of the main causes of morbidity and mortality worldwide. In both disorders, tissues such as muscle, fat and liver become less responsive or resistant to insulin. This state is also linked to other common health problems, such as obesity, polycystic ovarian disease, hyperlipidaemia, hypertension and atherosclerosis. The pathophysiology of insulin resistance involves a complex network of signalling pathways, activated by the insulin receptor, which regulates intermediary metabolism and its organization in cells. But recent studies have shown that numerous other hormones and signalling events attenuate insulin action, and are important in type 2 diabetes.",
"title": ""
},
{
"docid": "7604835dc6d7927880abcf7b91b5c405",
"text": "The computational modeling of emotion has been an area of growing interest in cognitive robotics research in recent years, but also a source of contention regarding how to conceive of emotion and how to model it. In this paper, emotion is characterized as (a) closely connected to embodied cognition, (b) grounded in homeostatic bodily regulation, and (c) a powerful organizational principle—affective modulation of behavioral and cognitive mechanisms—that is ‘useful’ in both biological brains and robotic cognitive architectures. We elaborate how emotion theories and models centered on core neurological structures in the mammalian brain, and inspired by embodied, dynamical, and enactive approaches in cognitive science, may impact on computational and robotic modeling. In light of the theoretical discussion, work in progress on the development of an embodied cognitive-affective architecture for robots is presented, incorporating aspects of the theories discussed.",
"title": ""
},
{
"docid": "5f3b787993ae1ebae34d8cee3ba1a975",
"text": "Neisseria meningitidis remains an important cause of severe sepsis and meningitis worldwide. The bacterium is only found in human hosts, and so must continually coexist with the immune system. Consequently, N meningitidis uses multiple mechanisms to avoid being killed by antimicrobial proteins, phagocytes, and, crucially, the complement system. Much remains to be learnt about the strategies N meningitidis employs to evade aspects of immune killing, including mimicry of host molecules by bacterial structures such as capsule and lipopolysaccharide, which poses substantial problems for vaccine design. To date, available vaccines only protect individuals against subsets of meningococcal strains. However, two promising vaccines are currently being assessed in clinical trials and appear to offer good prospects for an effective means of protecting individuals against endemic serogroup B disease, which has proven to be a major challenge in vaccine research.",
"title": ""
},
{
"docid": "b045350bfb820634046bff907419d1bf",
"text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.",
"title": ""
},
{
"docid": "ecd486fabd206ad8c28ea9d9da8cd0ee",
"text": "The prevailing binding of SOAP to HTTP specifies that SOAP messages be encoded as an XML 1.0 document which is then sent between client and server. XML processing however can be slow and memory intensive, especially for scientific data, and consequently SOAP has been regarded as an inappropriate protocol for scientific data. Efficiency considerations thus lead to the prevailing practice of separating data from the SOAP control channel. Instead, it is stored in specialized binary formats and transmitted either via attachments or indirectly via a file sharing mechanism, such as GridFTP or HTTP. This separation invariably complicates development due to the multiple libraries and type systems to be handled; furthermore it suffers from performance issues, especially when handling small binary data. As an alternative solution, binary XML provides a highly efficient encoding scheme for binary data in the XML and SOAP messages, and with it we can gain high performance as well as unifying the development environment without unduly impacting the Web service protocol stack. In this paper we present our implementation of a generic SOAP engine that supports both textual XML and binary XML as the encoding scheme of the message. We also present our binary XML data model and encoding scheme. Our experiments show that for scientific applications binary XML together with the generic SOAP implementation not only ease development, but also provide better performance and are more widely applicable than the commonly used separated schemes",
"title": ""
},
{
"docid": "e9f19d60dfa80d34ca4db370080b977d",
"text": "This paper reviews three recent books on data mining written from three different perspectives, i.e. databases, machine learning, and statistics. Although the exploration in this paper is suggestive instead of conclusive, it reveals that besides some common properties, different perspectives lay strong emphases on different aspects of data mining. The emphasis of the database perspective is on efficiency because this perspective strongly concerns the whole discovery process and huge data volume. The emphasis of the machine learning perspective is on effectiveness because this perspective is heavily attracted by substantive heuristics working well in data analysis although they may not always be useful. As for the statistics perspective, its emphasis is on validity because this perspective cares much for mathematical soundness behind mining methods.",
"title": ""
},
{
"docid": "243d1dc8df4b8fbd37cc347a6782a2b5",
"text": "This paper introduces a framework for`curious neural controllers' which employ an adaptive world model for goal directed on-line learning. First an on-line reinforcement learning algorithm for autonomousànimats' is described. The algorithm is based on two fully recurrent`self-supervised' continually running networks which learn in parallel. One of the networks learns to represent a complete model of the environmental dynamics and is called thèmodel network'. It provides completècredit assignment paths' into the past for the second network which controls the animats physical actions in a possibly reactive environment. The an-imats goal is to maximize cumulative reinforcement and minimize cumulativèpain'. The algorithm has properties which allow to implement something like the desire to improve the model network's knowledge about the world. This is related to curiosity. It is described how the particular algorithm (as well as similar model-building algorithms) may be augmented by dynamic curiosity and boredom in a natural manner. This may be done by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance, thus showing a rudimentary form of self-introspective behavior.",
"title": ""
},
{
"docid": "709aa1bc4ace514e46f7edbb07fb03a9",
"text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.",
"title": ""
},
{
"docid": "74fcade8e5f5f93f3ffa27c4d9130b9f",
"text": "Resampling is an important signature of manipulated images. In this paper, we propose two methods to detect and localize image manipulations based on a combination of resampling features and deep learning. In the first method, the Radon transform of resampling features are computed on overlapping image patches. Deep learning classifiers and a Gaussian conditional random field model are then used to create a heatmap. Tampered regions are located using a Random Walker segmentation method. In the second method, resampling features computed on overlapping image patches are passed through a Long short-term memory (LSTM) based network for classification and localization. We compare the performance of detection/localization of both these methods. Our experimental results show that both techniques are effective in detecting and localizing digital image forgeries.",
"title": ""
},
{
"docid": "80383246c35226231b4f136c6cc0019b",
"text": "How to automatically monitor wide critical open areas is a challenge to be addressed. Recent computer vision algorithms can be exploited to avoid the deployment of a large amount of expensive sensors. In this work, we propose our object tracking system which, combined with our recently developed anomaly detection system. can provide intelligence and protection for critical areas. In this work. we report two case studies: an international pier and a city parking lot. We acquire sequences to evaluate the effectiveness of the approach in challenging conditions. We report quantitative results for object counting, detection, parking analysis, and anomaly detection. Moreover, we report state-of-the-art results for statistical anomaly detection on a public dataset.",
"title": ""
}
] |
scidocsrr
|
4cb70dbe54b21485773023fd942ae7de
|
Service-Dominant Strategic Sourcing: Value Creation Versus Cost Saving
|
[
{
"docid": "dd62fd669d40571cc11d64789314dba1",
"text": "It took the author 30 years to develop the Viable System Model, which sets out to explain how systems are viable – that is, capable of independent existence. He wanted to elucidate the laws of viability in order to facilitate the management task, and did so in a stream of papers and three (of his ten) books. Much misunderstanding about the VSM and its use seems to exist; especially its methodological foundations have been largely forgotten, while its major results have hardly been noted. This paper reflects on the history, nature and present status of the VSM, without seeking once again to expound the model in detail or to demonstrate its validity. It does, however, provide a synopsis, present the methodology and confront some highly contentious issues about both the managerial and scientific paradigms.",
"title": ""
}
] |
[
{
"docid": "9b0f286b03b3d81942747a98ac0e8817",
"text": "Automated recommendations for next tracks to listen to or to include in a playlist are a common feature on modern music platforms. Correspondingly, a variety of algorithmic approaches for determining tracks to recommend have been proposed in academic research. The most sophisticated among them are often based on conceptually complex learning techniques which can also require substantial computational resources or special-purpose hardware like GPUs. Recent research, however, showed that conceptually more simple techniques, e.g., based on nearest-neighbor schemes, can represent a viable alternative to such techniques in practice.\n In this paper, we describe a hybrid technique for next-track recommendation, which was evaluated in the context of the ACM RecSys 2018 Challenge. A combination of nearest-neighbor techniques, a standard matrix factorization algorithm, and a small set of heuristics led our team KAENEN to the 3rd place in the \"creative\" track and the 7th one in the \"main\" track, with accuracy results only a few percent below the winning teams. Given that offline prediction accuracy is only one of several possible quality factors in music recommendation, practitioners have to validate if slight accuracy improvements truly justify the use of highly complex algorithms in real-world applications.",
"title": ""
},
{
"docid": "4174c1d49ff8755c6b82c2b453918d29",
"text": "Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.",
"title": ""
},
{
"docid": "e6dcc8f80b5b6528531b7f6e617cd633",
"text": "Over 2 million military and civilian personnel per year (over 1 million in the United States) are occupationally exposed, respectively, to jet propulsion fuel-8 (JP-8), JP-8 +100 or JP-5, or to the civil aviation equivalents Jet A or Jet A-1. Approximately 60 billion gallon of these kerosene-based jet fuels are annually consumed worldwide (26 billion gallon in the United States), including over 5 billion gallon of JP-8 by the militaries of the United States and other NATO countries. JP-8, for example, represents the largest single chemical exposure in the U.S. military (2.53 billion gallon in 2000), while Jet A and A-1 are among the most common sources of nonmilitary occupational chemical exposure. Although more recent figures were not available, approximately 4.06 billion gallon of kerosene per se were consumed in the United States in 1990 (IARC, 1992). These exposures may occur repeatedly to raw fuel, vapor phase, aerosol phase, or fuel combustion exhaust by dermal absorption, pulmonary inhalation, or oral ingestion routes. Additionally, the public may be repeatedly exposed to lower levels of jet fuel vapor/aerosol or to fuel combustion products through atmospheric contamination, or to raw fuel constituents by contact with contaminated groundwater or soil. Kerosene-based hydrocarbon fuels are complex mixtures of up to 260+ aliphatic and aromatic hydrocarbon compounds (C(6) -C(17+); possibly 2000+ isomeric forms), including varying concentrations of potential toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, naphthalenes (including polycyclic aromatic hydrocarbons [PAHs], and certain other C(9)-C(12) fractions (i.e., n-propylbenzene, trimethylbenzene isomers). While hydrocarbon fuel exposures occur typically at concentrations below current permissible exposure limits (PELs) for the parent fuel or its constituent chemicals, it is unknown whether additive or synergistic interactions among hydrocarbon constituents, up to six performance additives, and other environmental exposure factors may result in unpredicted toxicity. While there is little epidemiological evidence for fuel-induced death, cancer, or other serious organic disease in fuel-exposed workers, large numbers of self-reported health complaints in this cohort appear to justify study of more subtle health consequences. A number of recently published studies reported acute or persisting biological or health effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, to constituent chemicals of these fuels, or to fuel combustion products. This review provides an in-depth summary of human, animal, and in vitro studies of biological or health effects from exposure to JP-8, JP-8 +100, JP-5, Jet A, Jet A-1, or kerosene.",
"title": ""
},
{
"docid": "79079ee1e352b997785dc0a85efed5e4",
"text": "Automatic recognition of the historical letters (XI-XVIII centuries) carved on the stoned walls of St.Sophia cathedral in Kyiv (Ukraine) was demonstrated by means of capsule deep learning neural network. It was applied to the image dataset of the carved Glagolitic and Cyrillic letters (CGCL), which was assembled and pre-processed recently for recognition and prediction by machine learning methods. CGCL dataset contains >4000 images for glyphs of 34 letters which are hardly recognized by experts even in contrast to notMNIST dataset with the better images of 10 letters taken from different fonts. The capsule network was applied for both datasets in three regimes: without data augmentation, with lossless data augmentation, and lossy data augmentation. Despite the much worse quality of CGCL dataset and extremely low number of samples (in comparison to notMNIST dataset) the capsule network model demonstrated much better results than the previously used convolutional neural network (CNN). The training rate for capsule network model was 5-6 times higher than for CNN. The validation accuracy (and validation loss) was higher (lower) for capsule network model than for CNN without data augmentation even. The area under curve (AUC) values for receiver operating characteristic (ROC) were also higher for the capsule network model than for CNN model: 0.88-0.93 (capsule network) and 0.50 (CNN) without data augmentation, 0.91-0.95 (capsule network) and 0.51 (CNN) with lossless data augmentation, and similar results of 0.91-0.93 (capsule network) and 0.9 (CNN) in the regime of lossless data augmentation only. The confusion matrixes were much better for capsule network than for CNN model and gave the much lower type I (false positive) and type II (false negative) values in all three regimes of data augmentation. These results supports the previous claims that capsule-like networks allow to reduce error rates not only on MNIST digit dataset, but on the other notMNIST letter dataset and the more complex CGCL handwriting graffiti letter dataset also. Moreover, capsule-like networks allow to reduce training set sizes to 180 images even like in this work, and they are considerably better than CNNs on the highly distorted and incomplete letters even like CGCL handwriting graffiti. Keywords— machine learning, deep learning, capsule neural network, stone carving dataset, notMNIST, data augmentation",
"title": ""
},
{
"docid": "5ec1cff52a55c5bd873b5d0d25e0456b",
"text": "This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.",
"title": ""
},
{
"docid": "0a58aa0c5dff94efa183fcf6fb7952f6",
"text": "When people explore new environments they often use landmarks as reference points to help navigate and orientate themselves. This research paper examines how spatial datasets can be used to build a system for use in an urban environment which functions as a city guide, announcing Features of Interest (FoI) as they become visible to the user (not just proximal), as the user moves freely around the city. Visibility calculations for the FoIs were pre-calculated based on a digital surface model derived from LIDAR (Light Detection and Ranging) data. The results were stored in a textbased relational database management system (RDBMS) for rapid retrieval. All interaction between the user and the system was via a speech-based interface, allowing the user to record and request further information on any of the announced FoI. A prototype system, called Edinburgh Augmented Reality System (EARS) , was designed, implemented and field tested in order to assess the effectiveness of these ideas. The application proved to be an innovating, ‘non-invasive’ approach to augmenting the user’s reality",
"title": ""
},
{
"docid": "4d1be9aebf7534cce625b95bde4696c6",
"text": "BlockChain (BC) has attracted tremendous attention due to its immutable nature and the associated security and privacy benefits. BC has the potential to overcome security and privacy challenges of Internet of Things (IoT). However, BC is computationally expensive, has limited scalability and incurs significant bandwidth overheads and delays which are not suited to the IoT context. We propose a tiered Lightweight Scalable BC (LSB) that is optimized for IoT requirements. We explore LSB in a smart home setting as a representative example for broader IoT applications. Low resource devices in a smart home benefit from a centralized manager that establishes shared keys for communication and processes all incoming and outgoing requests. LSB achieves decentralization by forming an overlay network where high resource devices jointly manage a public BC that ensures end-to-end privacy and security. The overlay is organized as distinct clusters to reduce overheads and the cluster heads are responsible for managing the public BC. LSB incorporates several optimizations which include algorithms for lightweight consensus, distributed trust and throughput management. Qualitative arguments demonstrate that LSB is resilient to several security attacks. Extensive simulations show that LSB decreases packet overhead and delay and increases BC scalability compared to relevant baselines.",
"title": ""
},
{
"docid": "5c29083624be58efa82b4315976f8dc2",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously lear ns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space . The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization metho d is employed to handle the discrete and low-rank constraints , yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition ra tes using fewer features and samples.",
"title": ""
},
{
"docid": "471af6726ec78126fcf46f4e42b666aa",
"text": "A new thermal tuning circuit for optical ring modulators enables demonstration of an optical chip-to-chip link for the first time with monolithically integrated photonic devices in a commercial 45nm SOI process, without any process changes. The tuning circuit uses independent 1/0 level-tracking and 1/0 bit counting to remain resilient against laser self-heating transients caused by non-DC-balanced transmit data. A 30fJ/bit transmitter and 374fJ/bit receiver with 6μApk-pk photocurrent sensitivity complete the 5Gb/s link. The thermal tuner consumes 275fJ/bit and achieves a 600 GHz tuning range with a heater tuning efficiency of 3.8μW/GHz.",
"title": ""
},
{
"docid": "24a10176ec2367a6a0b5333d57b894b8",
"text": "Automated classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. We have investigated this possibility experimentally and numerically using a diffraction imaging approach. A fast image analysis software based on the gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images. The results of GLCM analysis and subsequent classification demonstrate the potential for rapid classification among six types of cultured cells. Combined with numerical results we show that the method of diffraction imaging flow cytometry has the capacity as a platform for high-throughput and label-free classification of biological cells.",
"title": ""
},
{
"docid": "9edfedc5a1b17481ee8c16151cf42c88",
"text": "Nevus comedonicus is considered a genodermatosis characterized by the presence of multiple groups of dilated pilosebaceous orifices filled with black keratin plugs, with sharply unilateral distribution mostly on the face, neck, trunk, upper arms. Lesions can appear at any age, frequently before the age of 10 years, but they are usually present at birth. We present a 2.7-year-old girl with a very severe form of nevus comedonicus. She exhibited lesions located initially at the left side of the body with a linear characteristic, following Blascko lines T1/T2, T5, T7, S1 /S2, but progressively developed lesions on the right side of the scalp and left gluteal area.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "3e9a214856235ef36a4dd2e9684543b7",
"text": "Leaf area index (LAI) is a key biophysical variable that can be used to derive agronomic information for field management and yield prediction. In the context of applying broadband and high spatial resolution satellite sensor data to agricultural applications at the field scale, an improved method was developed to evaluate commonly used broadband vegetation indices (VIs) for the estimation of LAI with VI–LAI relationships. The evaluation was based on direct measurement of corn and potato canopies and on QuickBird multispectral images acquired in three growing seasons. The selected VIs were correlated strongly with LAI but with different efficiencies for LAI estimation as a result of the differences in the stabilities, the sensitivities, and the dynamic ranges. Analysis of error propagation showed that LAI noise inherent in each VI–LAI function generally increased with increasing LAI and the efficiency of most VIs was low at high LAI levels. Among selected VIs, the modified soil-adjusted vegetation index (MSAVI) was the best LAI estimator with the largest dynamic range and the highest sensitivity and overall efficiency for both crops. QuickBird image-estimated LAI with MSAVI–LAI relationships agreed well with ground-measured LAI with the root-mean-square-error of 0.63 and 0.79 for corn and potato canopies, respectively. LAI estimated from the high spatial resolution pixel data exhibited spatial variability similar to the ground plot measurements. For field scale agricultural applications, MSAVI–LAI relationships are easy-to-apply and reasonably accurate for estimating LAI. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2848635e59cf2a41871d79748822c176",
"text": "The ventral pathway is involved in primate visual object recognition. In humans, a central stage in this pathway is an occipito–temporal region termed the lateral occipital complex (LOC), which is preferentially activated by visual objects compared to scrambled images or textures. However, objects have characteristic attributes (such as three-dimensional shape) that can be perceived both visually and haptically. Therefore, object-related brain areas may hold a representation of objects in both modalities. Using fMRI to map object-related brain regions, we found robust and consistent somatosensory activation in the occipito–temporal cortex. This region showed clear preference for objects compared to textures in both modalities. Most somatosensory object-selective voxels overlapped a part of the visual object-related region LOC. Thus, we suggest that neuronal populations in the occipito–temporal cortex may constitute a multimodal object-related network.",
"title": ""
},
{
"docid": "9960d17cb019350a279e4daccccb8e87",
"text": "Deep learning with neural networks is applied by an increasing number of people outside of classic research environments, due to the vast success of the methodology on a wide range of machine perception tasks. While this interest is fueled by beautiful success stories, practical work in deep learning on novel tasks without existing baselines remains challenging. This paper explores the specific challenges arising in the realm of real world tasks, based on case studies from research & development in conjunction with industry, and extracts lessons learned from them. It thus fills a gap between the publication of latest algorithmic and methodical developments, and the usually omitted nitty-gritty of how to make them work. Specifically, we give insight into deep learning projects on face matching, print media monitoring, industrial quality control, music scanning, strategy game playing, and automated machine learning, thereby providing best practices for deep learning in practice.",
"title": ""
},
{
"docid": "a2e0163aebb348d3bfab7ebac119e0c0",
"text": "Herein we report the first study of the oxygen reduction reaction (ORR) catalyzed by a cofacial porphyrin scaffold accessed in high yield (overall 53%) using coordination-driven self-assembly with no chromatographic purification steps. The ORR activity was investigated using chemical and electrochemical techniques on monomeric cobalt(II) tetra(meso-4-pyridyl)porphyrinate (CoTPyP) and its cofacial analogue [Ru8(η6-iPrC6H4Me)8(dhbq)4(CoTPyP)2][OTf]8 (Co Prism) (dhbq = 2,5-dihydroxy-1,4-benzoquinato, OTf = triflate) as homogeneous oxygen reduction catalysts. Co Prism is obtained in one self-assembly step that organizes six total building blocks, two CoTPyP units and four arene-Ru clips, into a cofacial motif previously demonstrated with free-base, Zn(II), and Ni(II) porphyrins. Turnover frequencies (TOFs) from chemical reduction (66 vs 6 h-1) and rate constants of overall homogeneous catalysis (kobs) determined from rotating ring-disk experiments (1.1 vs 0.05 h-1) establish a cofacial enhancement upon comparison of the activities of Co Prism and CoTPyP, respectively. Cyclic voltammetry was used to initially probe the electrochemical catalytic behavior. Rotating ring-disk electrode studies were completed to probe the Faradaic efficiency and obtain an estimate of the rate constant associated with the ORR.",
"title": ""
},
{
"docid": "c1632ead357d08c3e019bb12ff75e756",
"text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.",
"title": ""
},
{
"docid": "a34825f20b645a146857c1544c08e66e",
"text": "1. The midterm will have about 5-6 long questions, and about 8-10 short questions. Space will be provided on the actual midterm for you to write your answers. 2. The midterm is meant to be educational, and as such some questions could be quite challenging. Use your time wisely to answer as much as you can! 3. For additional practice, please see CS 229 extra problem sets available at 1. [13 points] Generalized Linear Models Recall that generalized linear models assume that the response variable y (conditioned on x) is distributed according to a member of the exponential family: p(y; η) = b(y) exp(ηT (y) − a(η)), where η = θ T x. For this problem, we will assume η ∈ R. (a) [10 points] Given a training set {(x (i) , y (i))} m i=1 , the loglikelihood is given by (θ) = m i=1 log p(y (i) | x (i) ; θ). Give a set of conditions on b(y), T (y), and a(η) which ensure that the loglikelihood is a concave function of θ (and thus has a unique maximum). Your conditions must be reasonable, and should be as weak as possible. (E.g., the answer \" any b(y), T (y), and a(η) so that (θ) is concave \" is not reasonable. Similarly, overly narrow conditions, including ones that apply only to specific GLMs, are also not reasonable.) (b) [3 points] When the response variable is distributed according to a Normal distribution (with unit variance), we have b(y) = 1 √ 2π e −y 2 2 , T (y) = y, and a(η) = η 2 2. Verify that the condition(s) you gave in part (a) hold for this setting.",
"title": ""
},
{
"docid": "e3823047ccc723783cf05f24ca60d449",
"text": "Social science studies have acknowledged that the social influence of individuals is not identical. Social networks structure and shared text can reveal immense information about users, their interests, and topic-based influence. Although some studies have considered measuring user influence, less has been on measuring and estimating topic-based user influence. In this paper, we propose an approach that incorporates network structure, user-generated content for topic-based influence measurement, and user’s interactions in the network. We perform experimental analysis on Twitter data and show that our proposed approach can effectively measure topic-based user influence.",
"title": ""
},
{
"docid": "ec9eb309dd9d6f72bd7286580e75d36d",
"text": "This paper describes SONDY, a tool for analysis of trends and dynamics in online social network data. SONDY addresses two audiences: (i) end-users who want to explore social activity and (ii) researchers who want to experiment and compare mining techniques on social data. SONDY helps end-users like media analysts or journalists understand social network users interests and activity by providing emerging topics and events detection as well as network analysis functionalities. To this end, the application proposes visualizations such as interactive time-lines that summarize information and colored user graphs that reflect the structure of the network. SONDY also provides researchers an easy way to compare and evaluate recent techniques to mine social data, implement new algorithms and extend the application without being concerned with how to make it accessible. In the demo, participants will be invited to explore information from several datasets of various sizes and origins (such as a dataset consisting of 7,874,772 messages published by 1,697,759 Twitter users during a period of 7 days) and apply the different functionalities of the platform in real-time.",
"title": ""
}
] |
scidocsrr
|
901fbc60be4ba1bc1ae7b59755786123
|
CIPA: A collaborative intrusion prevention architecture for programmable network and SDN
|
[
{
"docid": "a9b20ad74b3a448fbc1555b27c4dcac9",
"text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the errorfunction. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other wellknown adaptive techniques.",
"title": ""
}
] |
[
{
"docid": "e19445c2ea8e19002a85ec9ace463990",
"text": "In this paper we propose a system that takes attendance of student and maintaining its records in an academic institute automatically. Manually taking the attendance and maintaining it for a long time makes it difficult task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance with the help of a fingerprint sensor module and all the records are saved on a computer. Fingerprint sensor module and LCD screen are dynamic which can move in the room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor module. On identification of particular student, his attendance record is updated in the database and he/she is notified through LCD screen. In this system we are going to generate Microsoft excel attendance report on computer. This report will generate automatically after 15 days (depends upon user). This report will be sent to the respected HOD, teacher and student’s parents email Id.",
"title": ""
},
{
"docid": "cecc2950741d12045d9ba3ebad1fc69f",
"text": "Learning to read is extremely difficult for about 10% of children; they are affected by a neurodevelopmental disorder called dyslexia [1, 2]. The neurocognitive causes of dyslexia are still hotly debated [3-12]. Dyslexia remediation is far from being fully achieved [13], and the current treatments demand high levels of resources [1]. Here, we demonstrate that only 12 hr of playing action video games-not involving any direct phonological or orthographic training-drastically improve the reading abilities of children with dyslexia. We tested reading, phonological, and attentional skills in two matched groups of children with dyslexia before and after they played action or nonaction video games for nine sessions of 80 min per day. We found that only playing action video games improved children's reading speed, without any cost in accuracy, more so than 1 year of spontaneous reading development and more than or equal to highly demanding traditional reading treatments. Attentional skills also improved during action video game training. It has been demonstrated that action video games efficiently improve attention abilities [14, 15]; our results showed that this attention improvement can directly translate into better reading abilities, providing a new, fast, fun remediation of dyslexia that has theoretical relevance in unveiling the causal role of attention in reading acquisition.",
"title": ""
},
{
"docid": "5c4c265df2d24350340eb956191417ae",
"text": "When a remotely sited wind farm is connected to the utility power system through a distribution line, the overcurrent relay at the common coupling point needs a directional feature. This paper presents a method for estimating the direction of fault in such radial distribution systems using phase change in current. The difference in phase angle between the positive-sequence component of the current during fault and prefault conditions has been found to be a good indicator of the fault direction in a three-phase system. A rule base formed for the purpose decides the location of fault with respect to the relay in a distribution system. Such a strategy reduces the cost of the voltage sensor and/or connection for a protection scheme which is of relevance in emerging distributed-generation systems. The algorithm has been tested through simulation for different radial distribution systems.",
"title": ""
},
{
"docid": "b1d2ff76f8b4437a731ef5ccdb46429f",
"text": "Form, function and the relationship between the two are notions that have served a crucial role in design science. Within architectural design, key aspects of the anticipated function of buildings, or of spatial environments in general, are supposed to be determined by their structural form, i.e., their shape, layout, or connectivity. Whereas the philosophy of form and function is a well-researched topic, the practical relations and dependencies between form and function are only known implicitly by designers and architects. Specifically, the formal modelling of structural form and resulting artefactual function within design and design assistance systems remains elusive. In our work, we aim at making these definitions explicit by the ontological modelling of domain entities, their properties and related constraints. We thus have to particularly focus on formal interpretation of the terms “(structural) form” and “(artefactual) function”. We put these notions into practice by formalising ontological specifications accordingly by using modularly constructed ontologies for the architectural design domain. A key aspect of our modelling approach is the use of formal qualitative spatial calculi and conceptual requirements as a link between the structural form of a design and the differing functional capabilities that it affords or leads to. We demonstrate the manner in which our ontological modelling reflects notions of architectural form and function, and how it facilitates the conceptual modelling of requirement constraints for architectural design.",
"title": ""
},
{
"docid": "15b8b0f3682e2eb7c1b1a62be65d6327",
"text": "Data augmentation is widely used to train deep neural networks for image classification tasks. Simply flipping images can help learning by increasing the number of training images by a factor of two. However, data augmentation in natural language processing is much less studied. Here, we describe two methods for data augmentation for Visual Question Answering (VQA). The first uses existing semantic annotations to generate new questions. The second method is a generative approach using recurrent neural networks. Experiments show the proposed schemes improve performance of baseline and state-of-the-art VQA algorithms.",
"title": ""
},
{
"docid": "aeb578582a6c612e0640449e12000a21",
"text": "Intelligent Tutoring Systems (ITS) generate a wealth of finegrained student interaction data. Although it seems likely that teachers could benefit from access to advanced analytics generated from these data, ITSs do not typically come with dashboards designed for teachers’ needs. In this project, we follow a user-centered design approach to create a dashboard for teachers using ITSs.",
"title": ""
},
{
"docid": "e1d0c07f9886d3258f0c5de9dd372e17",
"text": "strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book. 2",
"title": ""
},
{
"docid": "42d755dbb843d9e5ba4bae4b492c2b8e",
"text": "Context: The management of software development productivity is a key issue in software organizations, where the major drivers are lower cost and shorter time-to-market. Agile methods, including Extreme Programming and Scrum, have evolved as “light” approaches that simplify the software development process, potentially leading to increased team productivity. However, little empirical research has examined which factors do have an impact on productivity and in what way, when using agile methods. Objective: Our objective is to provide a better understanding of the factors and mediators that impact agile team productivity. Method: We have conducted a multiple-case study for six months in three large Brazilian companies that have been using agile methods for over two years. We have focused on the main productivity factors perceived by team members through interviews, documentation from retrospectives, and non-participant observation. Results: We developed a novel conceptual framework, using thematic analysis to understand the possible mechanisms behind such productivity factors. Agile team management was found to be the most influential factor in achieving agile team productivity. At the intra-team level, the main productivity factors were team design (structure and work allocation) and member turnover. At the inter-team level, the main productivity factors were how well teams could be effectively coordinated by proper interfaces and other dependencies and avoiding delays in providing promised software to dependent teams. Conclusion: Teams should be aware of the influence and magnitude of turnover, which has been shown negative for agile team productivity. Team design choices remain an important factor impacting team productivity, even more pronounced on agile teams that rely on teamwork and people factors. The intra-team coordination processes must be adjusted to enable productive work by considering priorities and pace between teams. Finally, the revised conceptual framework for agile team productivity supports further tests through confirmatory studies.",
"title": ""
},
{
"docid": "e70425a0b9d14ff4223f3553de52c046",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "599d814fd3b3a758f3b2459b74aeb92c",
"text": "Relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text. We propose a novel convolutional neural network architecture for this task, relying on two levels of attention in order to better discern patterns in heterogeneous contexts. This architecture enables endto-end learning from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments show that our model outperforms previous state-of-the-art methods, including those relying on much richer forms of prior knowledge.",
"title": ""
},
{
"docid": "f9c8209fcecbbed99aa29761dffc8e25",
"text": "ImageNet is a large-scale database of object classes with millions of images. Unfortunately only a small fraction of them is manually annotated with bounding-boxes. This prevents useful developments, such as learning reliable object detectors for thousands of classes. In this paper we propose to automatically populate ImageNet with many more bounding-boxes, by leveraging existing manual annotations. The key idea is to localize objects of a target class for which annotations are not available, by transferring knowledge from related source classes with available annotations. We distinguish two kinds of source classes: ancestors and siblings. Each source provides knowledge about the plausible location, appearance and context of the target objects, which induces a probability distribution over windows in images of the target class. We learn to combine these distributions so as to maximize the location accuracy of the most probable window. Finally, we employ the combined distribution in a procedure to jointly localize objects in all images of the target class. Through experiments on 0.5 million images from 219 classes we show that our technique (i) annotates a wide range of classes with bounding-boxes; (ii) effectively exploits the hierarchical structure of ImageNet, since all sources and types of knowledge we propose contribute to the results; (iii) scales efficiently.",
"title": ""
},
{
"docid": "987024b9cca47797813f27da08d9a7c6",
"text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.",
"title": ""
},
{
"docid": "79f691668b5e1d13cd1bfa70dfa33384",
"text": "Reported speech in the form of direct and indirect reported speech is an important indicator of evidentiality in traditional newspaper texts, but also increasingly in the new media that rely heavily on citation and quotation of previous postings, as for instance in blogs or newsgroups. This paper details the basic processing steps for reported speech analysis and reports on performance of an implementation in form of a GATE resource.",
"title": ""
},
{
"docid": "3f11c629670d986b8a266bae08e8a8d0",
"text": "SURVIVAL ANALYSIS APPROACH FOR EARLY PREDICTION OF STUDENT DROPOUT by SATTAR AMERI December 2015 Advisor: Dr. Chandan Reddy Major: Computer Science Degree: Master of Science Retention of students at colleges and universities has long been a concern for educators for many decades. The consequences of student attrition are significant for both students, academic staffs and the overall institution. Thus, increasing student retention is a long term goal of any academic institution. The most vulnerable students at all institutions of higher education are the freshman students, who are at the highest risk of dropping out at the beginning of their study. Consequently, the early identification of “at-risk” students is a crucial task that needs to be addressed precisely. In this thesis, we develop a framework for early prediction of student success using survival analysis approach. We propose time-dependent Cox (TD-Cox), which is based on the Cox proportional hazard regression model and also captures time-varying factors to address the challenge of predicting dropout students as well as the semester that the dropout will occur, to enable proactive interventions. This is critical in student retention problem because not only correctly classifying whether student is going to dropout is important but also when this is going to happen is crucial to investigate. We evaluate our method on real student data collected at Wayne State University. The results show that the proposed Cox-based framework can predict the student dropout and the semester of dropout with high accuracy and precision compared to the other alternative state-of-the-art methods.",
"title": ""
},
{
"docid": "4de2c6422d8357e6cb00cce21e703370",
"text": "OBJECTIVE\nFalls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.\n\n\nDESIGN/SETTING/PARTICIPANTS\nProspective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.\n\n\nMEASUREMENTS\nFalls were reported on a standardized form that included a facility identification code, date, time of the day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.\n\n\nRESULTS\nMore than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need with lower fall risks both in the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.\n\n\nCONCLUSION\nThe differing fall risk patterns in specific subgroups may help to target preventive measures.",
"title": ""
},
{
"docid": "f85fb22836663d713074efcc9b1d3991",
"text": "Drawing annotations with 3D hand gestures in augmented reality are useful for creating visual and spatial references in the real world, especially when these gestures can be issued from a distance. Different techniques exist for highlighting physical objects with hand-drawn circle and arrow annotations from a distance, assuming an approximate 3D scene model (e.g., as provided by the Microsoft HoloLens). However, little is known about user preference and performance of such methods for annotating real-world 3D environments. In this paper, we compare different annotation methods using the HoloLens augmented reality development platform: Surface-Drawing and Air-Drawing, with either raw but smoothed or interpreted and beautified gesture input. For the Surface-Drawing method, users control a cursor that is projected onto the world model, allowing gesture input to occur directly on the surfaces of real-world objects. For the Air-Drawing method, gesture drawing occurs at the user's fingertip and is projected onto the world model on release. The methods have different characteristics regarding necessitated vergence switches and afforded cursor control. We performed an experiment in which users draw on two different real-world objects at different distances using the different methods. Results indicate that Surface-Drawing is more accurate than Air-Drawing and Beautified annotations are drawn faster than Non-Beautified; participants also preferred Surface-Drawing and Beautified.",
"title": ""
},
{
"docid": "9600f488c41b5574766067d32004400e",
"text": "A conversational agent, capable to have a ldquosense of humourrdquo is presented. The agent can both generate humorous sentences and recognize humoristic expressions introduced by the user during the dialogue. Humorist Bot makes use of well founded techniques of computational humor and it has been implemented using the ALICE framework embedded into an Yahoo! Messenger client. It includes also an avatar that changes the face expression according to humoristic content of the dialogue.",
"title": ""
},
{
"docid": "ef55470ed9fa3a1b792e347f5bddedbe",
"text": "Rechargeable lithium-ion batteries are promising candidates for building grid-level storage systems because of their high energy and power density, low discharge rate, and decreasing cost. A vital aspect in energy storage planning and operations is to accurately model the aging cost of battery cells, especially in irregular cycling operations. This paper proposes a semi-empirical lithium-ion battery degradation model that assesses battery cell life loss from operating profiles. We formulate the model by combining fundamental theories of battery degradation and our observations in battery aging test results. The model is adaptable to different types of lithium-ion batteries, and methods for tuning the model coefficients based on manufacturer's data are presented. A cycle-counting method is incorporated to identify stress cycles from irregular operations, allowing the degradation model to be applied to any battery energy storage (BES) applications. The usefulness of this model is demonstrated through an assessment of the degradation that a BES would incur by providing frequency control in the PJM regulation market.",
"title": ""
},
{
"docid": "48b88774957a6d30ae9d0a97b9643647",
"text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features",
"title": ""
},
{
"docid": "e31ea6b8c4a5df049782b463abc602ea",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
scidocsrr
|
d205f14331a9113a5eadee7947a3254e
|
Building Better Quality Predictors Using "$\epsilon$-Dominance"
|
[
{
"docid": "6e675e8a57574daf83ab78cea25688f5",
"text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore âunsupervisedâ approaches to quality prediction that does not require labelled data. An alternate technique is to use âsupervisedâ approaches that learn models from project data labelled with, say, âdefectiveâ or ânot-defectiveâ. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSEâ16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.",
"title": ""
},
{
"docid": "66bf2c7d6af4e2e7eec279888df23125",
"text": "Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction has been the main area of progress by reusing classifiers from other projects. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project. Satisfying the homogeneity requirement often requires significant effort (currently a very active area of research).\n An unsupervised classifier does not require any training data, therefore the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have been previously used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored before in our community.\n We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.",
"title": ""
},
{
"docid": "752e6d6f34ffc638e9a0d984a62db184",
"text": "Defect prediction models are classifiers that are trained to identify defect-prone software modules. Such classifiers have configurable parameters that control their characteristics (e.g., the number of trees in a random forest classifier). Recent studies show that these classifiers may underperform due to the use of suboptimal default parameter settings. However, it is impractical to assess all of the possible settings in the parameter spaces. In this paper, we investigate the performance of defect prediction models where Caret --- an automated parameter optimization technique --- has been applied. Through a case study of 18 datasets from systems that span both proprietary and open source domains, we find that (1) Caret improves the AUC performance of defect prediction models by as much as 40 percentage points; (2) Caret-optimized classifiers are at least as stable as (with 35% of them being more stable than) classifiers that are trained using the default settings; and (3) Caret increases the likelihood of producing a top-performing classifier by as much as 83%. Hence, we conclude that parameter settings can indeed have a large impact on the performance of defect prediction models, suggesting that researchers should experiment with the parameters of the classification techniques. Since automated parameter optimization techniques like Caret yield substantially benefits in terms of performance improvement and stability, while incurring a manageable additional computational cost, they should be included in future defect prediction studies.",
"title": ""
},
{
"docid": "f1cbd60e1bd721e185bbbd12c133ad91",
"text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.",
"title": ""
},
{
"docid": "2b010823e217e64e8e56b835cef40a1a",
"text": "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models.\n To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs).\n Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7% in precision, 11.5% in recall, and 14.2% in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9% in F1.",
"title": ""
}
] |
[
{
"docid": "a6459555eb54297f623800bcdf10dcc6",
"text": "Phishing causes billions of dollars in damage every year and poses a serious threat to the Internet economy. Email is still the most commonly used medium to launch phishing attacks [1]. In this paper, we present a comprehensive natural language based scheme to detect phishing emails using features that are invariant and fundamentally characterize phishing. Our scheme utilizes all the information present in an email, namely, the header, the links and the text in the body. Although it is obvious that a phishing email is designed to elicit an action from the intended victim, none of the existing detection schemes use this fact to identify phishing emails. Our detection protocol is designed specifically to distinguish between “actionable” and “informational” emails. To this end, we incorporate natural language techniques in phishing detection. We also utilize contextual information, when available, to detect phishing: we study the problem of phishing detection within the contextual confines of the user’s email box and demonstrate that context plays an important role in detection. To the best of our knowledge, this is the first scheme that utilizes natural language techniques and contextual information to detect phishing. We show that our scheme outperforms existing phishing detection schemes. Finally, our protocol detects phishing at the email level rather than detecting masqueraded websites. This is crucial to prevent the victim from clicking any harmful links in the email. Our implementation called PhishNet-NLP, operates between a user’s mail transfer agent (MTA) and mail user agent (MUA) and processes each arriving email for phishing attacks even before reaching the",
"title": ""
},
{
"docid": "3b2376110b0e6949379697b7ba6730b5",
"text": "............................................................................................................................... i Acknowledgments............................................................................................................... ii Table of",
"title": ""
},
{
"docid": "c9748c67c2ab17cfead44fe3b486883d",
"text": "Entropy coding is an integral part of most data compression systems. Huffman coding (HC) and arithmetic coding (AC) are two of the most widely used coding methods. HC can process a large symbol alphabet at each step allowing for fast encoding and decoding. However, HC typically provides suboptimal data rates due to its inherent approximation of symbol probabilities to powers of 1 over 2. In contrast, AC uses nearly accurate symbol probabilities, hence generally providing better compression ratios. However, AC relies on relatively slow arithmetic operations making the implementation computationally demanding. In this paper we discuss asymmetric numeral systems (ANS) as a new approach to entropy coding. While maintaining theoretical connections with AC, the proposed ANS-based coding can be implemented with much less computational complexity. While AC operates on a state defined by two numbers specifying a range, an ANS-based coder operates on a state defined by a single natural number such that the x ∈ ℕ state contains ≈ log2(x) bits of information. This property allows to have the entire behavior for a large alphabet summarized in the form of a relatively small table (e.g. a few kilobytes for a 256 size alphabet). The proposed approach can be interpreted as an equivalent to adding fractional bits to a Huffman coder to combine the speed of HC and the accuracy offered by AC. Additionally, ANS can simultaneously encrypt a message encoded this way. Experimental results demonstrate effectiveness of the proposed entropy coder.",
"title": ""
},
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "f6dd10d4b400234a28b221d0527e71c0",
"text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.",
"title": ""
},
{
"docid": "68420190120449343006879e23be8789",
"text": "Recent findings suggest that consolidation of emotional memories is influenced by menstrual phase in women. In contrast to other phases, in the mid-luteal phase when progesterone levels are elevated, cortisol levels are increased and correlated with emotional memory. This study examined the impact of progesterone on cortisol and memory consolidation of threatening stimuli under stressful conditions. Thirty women were recruited for the high progesterone group (in the mid-luteal phase) and 26 for the low progesterone group (in non-luteal phases of the menstrual cycle). Women were shown a series of 20 neutral or threatening images followed immediately by either a stressor (cold pressor task) or control condition. Participants returned two days later for a surprise free recall test of the images and salivary cortisol responses were monitored. High progesterone levels were associated with higher baseline and stress-evoked cortisol levels, and enhanced memory of negative images when stress was received. A positive correlation was found between stress-induced cortisol levels and memory recall of threatening images. These findings suggest that progesterone mediates cortisol responses to stress and subsequently predicts memory recall for emotionally arousing stimuli.",
"title": ""
},
{
"docid": "ec52b4c078c14a0d564577438846f178",
"text": "Millions of students across the United States cannot benefit fully from a traditional educational program because they have a disability that impairs their ability to participate in a typical classroom environment. For these students, computer-based technologies can play an especially important role. Not only can computer technology facilitate a broader range of educational activities to meet a variety of needs for students with mild learning disorders, but adaptive technology now exists than can enable even those students with severe disabilities to become active learners in the classroom alongside their peers who do not have disabilities. This article provides an overview of the role computer technology can play in promoting the education of children with special needs within the regular classroom. For example, use of computer technology for word processing, communication, research, and multimedia projects can help the three million students with specific learning and emotional disorders keep up with their nondisabled peers. Computer technology has also enhanced the development of sophisticated devices that can assist the two million students with more severe disabilities in overcoming a wide range of limitations that hinder classroom participation--from speech and hearing impairments to blindness and severe physical disabilities. However, many teachers are not adequately trained on how to use technology effectively in their classrooms, and the cost of the technology is a serious consideration for all schools. Thus, although computer technology has the potential to act as an equalizer by freeing many students from their disabilities, the barriers of inadequate training and cost must first be overcome before more widespread use can become a reality.",
"title": ""
},
{
"docid": "ef787cfc1b00c9d05ec9293ff802f172",
"text": "High Definition (HD) maps play an important role in modern traffic scenes. However, the development of HD maps coverage grows slowly because of the cost limitation. To efficiently model HD maps, we proposed a convolutional neural network with a novel prediction layer and a zoom module, called LineNet. It is designed for state-of-the-art lane detection in an unordered crowdsourced image dataset. And we introduced TTLane, a dataset for efficient lane detection in urban road modeling applications. Combining LineNet and TTLane, we proposed a pipeline to model HD maps with crowdsourced data for the first time. And the maps can be constructed precisely even with inaccurate crowdsourced data.",
"title": ""
},
{
"docid": "2f2291baa6c8a74744a16f27df7231d2",
"text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. 
Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ",
"title": ""
},
{
"docid": "63b2bc943743d5b8ef9220fd672df84f",
"text": "In multiagent systems, we often have a set of agents each of which have a preference ordering over a set of items and one would like to know these preference orderings for various tasks, for example, data analysis, preference aggregation, voting etc. However, we often have a large number of items which makes it impractical to ask the agents for their complete preference ordering. In such scenarios, we usually elicit these agents’ preferences by asking (a hopefully small number of) comparison queries — asking an agent to compare two items. Prior works on preference elicitation focus on unrestricted domain and the domain of single peaked preferences and show that the preferences in single peaked domain can be elicited by much less number of queries compared to unrestricted domain. We extend this line of research and study preference elicitation for single peaked preferences on trees which is a strict superset of the domain of single peaked preferences. We show that the query complexity crucially depends on the number of leaves, the path cover number, and the distance from path of the underlying single peaked tree, whereas the other natural parameters like maximum degree, diameter, pathwidth do not play any direct role in determining query complexity. We then investigate the query complexity for finding a weak Condorcet winner for preferences single peaked on a tree and show that this task has much less query complexity than preference elicitation. Here again we observe that the number of leaves in the underlying single peaked tree and the path cover number of the tree influence the query complexity of the problem.",
"title": ""
},
{
"docid": "86d58f4196ceb48e29cb143e6a157c22",
"text": "In this paper, we challenge a form of paragraph-to-question generation task. We propose a question generation system which can generate a set of comprehensive questions from a body of text. Besides the tree kernel functions to assess the grammatically of the generated questions, our goal is to rank them by using community-based question answering systems to calculate the importance of the generated questions. The main assumption behind our work is that each body of text is related to a topic of interest and it has a comprehensive information about the topic.",
"title": ""
},
{
"docid": "75b0a7b0fa0320a3666fb147471dd45f",
"text": "Maximum power densities by air-driven microbial fuel cells (MFCs) are considerably influenced by cathode performance. We show here that application of successive polytetrafluoroethylene (PTFE) layers (DLs), on a carbon/PTFE base layer, to the air-side of the cathode in a single chamber MFC significantly improved coulombic efficiencies (CEs), maximum power densities, and reduced water loss (through the cathode). Electrochemical tests using carbon cloth electrodes coated with different numbers of DLs indicated an optimum increase in the cathode potential of 117 mV with four-DLs, compared to a <10 mV increase due to the carbon base layer alone. In MFC tests, four-DLs was also found to be the optimum number of coatings, resulting in a 171% increase in the CE (from 19.1% to 32%), a 42% increase in the maximum power density (from 538 to 766 mW m ), and measurable water loss was prevented. The increase in CE due is believed to result from the increased power output and the increased operation time (due to a reduction in aerobic degradation of substrate sustained by oxygen diffusion through the cathode). 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "615e43e2dc7c12c38c87a4a6649407c0",
"text": "BACKGROUND\nThe management of chronic pain is a complex challenge worldwide. Cannabis-based medicines (CBMs) have proven to be efficient in reducing chronic pain, although the topic remains highly controversial in this field.\n\n\nOBJECTIVES\nThis study's aim is to conduct a conclusive review and meta-analysis, which incorporates all randomized controlled trials (RCTs) in order to update clinicians' and researchers' knowledge regarding the efficacy and adverse events (AEs) of CBMs for chronic and postoperative pain treatment.\n\n\nSTUDY DESIGN\nA systematic review and meta-analysis.\n\n\nMETHODS\nAn electronic search was conducted using Medline/Pubmed and Google Scholar with the use of Medical Subject Heading (MeSH) terms on all literature published up to July 2015. A follow-up manual search was conducted and included a complete cross-check of the relevant studies. The included studies were RCTs which compared the analgesic effects of CBMs to placebo. Hedges's g scores were calculated for each of the studies. A study quality assessment was performed utilizing the Jadad scale. A meta-analysis was performed utilizing random-effects models and heterogeneity between studies was statistically computed using I² statistic and tau² test.\n\n\nRESULTS\nThe results of 43 RCTs (a total of 2,437 patients) were included in this review, of which 24 RCTs (a total of 1,334 patients) were eligible for meta-analysis. This analysis showed limited evidence showing more pain reduction in chronic pain -0.61 (-0.78 to -0.43, P < 0.0001), especially by inhalation -0.93 (-1.51 to -0.35, P = 0.001) compared to placebo. Moreover, even though this review consisted of some RCTs that showed a clinically significant improvement with a decrease of pain scores of 2 points or more, 30% or 50% or more, the majority of the studies did not show an effect. Consequently, although the primary analysis showed that the results were favorable to CBMs over placebo, the clinical significance of these findings is uncertain. The most prominent AEs were related to the central nervous and the gastrointestinal (GI) systems.\n\n\nLIMITATIONS\nPublication limitation could have been present due to the inclusion of English-only published studies. Additionally, the included studies were extremely heterogeneous. Only 7 studies reported on the patients' history of prior consumption of CBMs. Furthermore, since cannabinoids are surrounded by considerable controversy in the media and society, cannabinoids have marked effects, so that inadequate blinding of the placebo could constitute an important source of limitation in these types of studies.\n\n\nCONCLUSIONS\nThe current systematic review suggests that CBMs might be effective for chronic pain treatment, based on limited evidence, primarily for neuropathic pain (NP) patients. Additionally, GI AEs occurred more frequently when CBMs were administered via oral/oromucosal routes than by inhalation.Key words: Cannabis, CBMs, chronic pain, postoperative pain, review, meta-analysis.",
"title": ""
},
{
"docid": "d1aa525575e33c587d86e89566c21a49",
"text": "This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.",
"title": ""
},
{
"docid": "7f47253095756d9640e8286a08ce3b74",
"text": "A speaker’s intentions can be represented by domain actions (domainindependent speech act and domain-dependent concept sequence pairs). Therefore, it is essential that domain actions be determined when implementing dialogue systems because a dialogue system should determine users’ intentions from their utterances and should create counterpart intentions to the users’ intentions. In this paper, a neural network model is proposed for classifying a user’s domain actions and planning a system’s domain actions. An integrated neural network model is proposed for simultaneously determining user and system domain actions using the same framework. The proposed model performed better than previous non-integrated models in an experiment using a goal-oriented dialogue corpus. This result shows that the proposed integration method contributes to improving domain action determination performance. Keywords—Domain Action, Speech Act, Concept Sequence, Neural Network",
"title": ""
},
{
"docid": "af3e8e26ec6f56a8cd40e731894f5993",
"text": "Probiotic bacteria are sold mainly in fermented foods, and dairy products play a predominant role as carriers of probiotics. These foods are well suited to promoting the positive health image of probiotics for several reasons: 1) fermented foods, and dairy products in particular, already have a positive health image; 2) consumers are familiar with the fact that fermented foods contain living microorganisms (bacteria); and 3) probiotics used as starter organisms combine the positive images of fermentation and probiotic cultures. When probiotics are added to fermented foods, several factors must be considered that may influence the ability of the probiotics to survive in the product and become active when entering the consumer's gastrointestinal tract. These factors include 1) the physiologic state of the probiotic organisms added (whether the cells are from the logarithmic or the stationary growth phase), 2) the physical conditions of product storage (eg, temperature), 3) the chemical composition of the product to which the probiotics are added (eg, acidity, available carbohydrate content, nitrogen sources, mineral content, water activity, and oxygen content), and 4) possible interactions of the probiotics with the starter cultures (eg, bacteriocin production, antagonism, and synergism). The interactions of probiotics with either the food matrix or the starter culture may be even more intensive when probiotics are used as a component of the starter culture. Some of these aspects are discussed in this article, with an emphasis on dairy products such as milk, yogurt, and cheese.",
"title": ""
},
{
"docid": "8eb0f822b4e8288a6b78abf0bf3aecbb",
"text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.",
"title": ""
},
{
"docid": "e731c10f822aa74b37263bee92a73be2",
"text": "Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.",
"title": ""
},
{
"docid": "815f6ee1be0244b3815903d97742bf5f",
"text": "To evaluate the short- and long-term results after a modified Chevrel technique for midline incisional hernia repair, regarding surgical technique, hospital stay, wound complications, recurrence rate, and postoperative quality of life. These results will be compared to the literature derived reference values regarding the original and modified Chevrel techniques. In this large retrospective, single surgeon, single centre cohort all modified Chevrel hernia repairs between 2000 and 2012 were identified. Results were obtained by reviewing patients’ medical charts. Postoperative quality of life was measured using the Carolina Comfort Scale. A multi-database literature search was conducted to compare the results of our series to the literature based reference values. One hundred and fifty-five patients (84 male, 71 female) were included. Eighty patients (52%) had a large incisional hernia (width ≥ 10 cm) according the definition of the European Hernia Society. Fourteen patients (9%) underwent a concomitant procedure. Median length-of-stay was 5 days. Within 30 days postoperative 36 patients (23.2%) had 39 postoperative complications of which 30 were mild (CDC I–II), and nine severe (CDC III–IV). Thirty-one surgical site occurrences were observed in thirty patients (19.4%) of which the majority were seroma (16 patients 10.3%). There was no hernia-related mortality during follow-up. Recurrence rate was 1.8% after a median follow-up of 52 months (12–128 months). Postoperative quality of life was rated excellent. The modified Chevrel technique for midline ventral hernias results in a moderate complication rate, low recurrence rate and high rated postoperative quality of life.",
"title": ""
}
] |
scidocsrr
|
676750cc6699250834bbba06c106c5c6
|
Cyber-Physical-Social Based Security Architecture for Future Internet of Things
|
[
{
"docid": "de8e9537d6b50467d014451dcaae6c0e",
"text": "With increased global interconnectivity, reliance on e-commerce, network services, and Internet communication, computer security has become a necessity. Organizations must protect their systems from intrusion and computer-virus attacks. Such protection must detect anomalous patterns by exploiting known signatures while monitoring normal computer programs and network usage for abnormalities. Current antivirus and network intrusion detection (ID) solutions can become overwhelmed by the burden of capturing and classifying new viral stains and intrusion patterns. To overcome this problem, a self-adaptive distributed agent-based defense immune system based on biological strategies is developed within a hierarchical layered architecture. A prototype interactive system is designed, implemented in Java, and tested. The results validate the use of a distributed-agent biological-system approach toward the computer-security problems of virus elimination and ID.",
"title": ""
},
{
"docid": "e33dd9c497488747f93cfcc1aa6fee36",
"text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.",
"title": ""
}
] |
[
{
"docid": "bc5b77c532c384281af64633fcf697a3",
"text": "The purpose of this study was to investigate the effects of a 12-week resistance-training program on muscle strength and mass in older adults. Thirty-three inactive participants (60-74 years old) were assigned to 1 of 3 groups: high-resistance training (HT), moderate-resistance training (MT), and control. After the training period, both HT and MT significantly increased 1-RM body strength, the peak torque of knee extensors and flexors, and the midthigh cross-sectional area of the total muscle. In addition, both HT and MT significantly decreased the abdominal circumference. HT was more effective in increasing 1-RM strength, muscle mass, and peak knee-flexor torque than was MT. These data suggest that muscle strength and mass can be improved in the elderly with both high- and moderate-intensity resistance training, but high-resistance training can lead to greater strength gains and hypertrophy than can moderate-resistance training.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "5c819727ba80894e72531a62e402f0c4",
"text": "omega-3 fatty acids, alpha-tocopherol, ascorbic acid, beta-carotene and glutathione determined in leaves of purslane (Portulaca oleracea), grown in both a controlled growth chamber and in the wild, were compared in composition to spinach. Leaves from both samples of purslane contained higher amounts of alpha-linolenic acid (18:3w3) than did leaves of spinach. Chamber-grown purslane contained the highest amount of 18:3w3. Samples from the two kinds of purslane contained higher leaves of alpha-tocopherol, ascorbic acid and glutathione than did spinach. Chamber-grown purslane was richer in all three and the amount of alpha-tocopherol was seven times higher than that found in spinach, whereas spinach was slightly higher in beta-carotene. One hundred grams of fresh purslane leaves (one serving) contain about 300-400 mg of 18:3w3; 12.2 mg of alpha-tocopherol; 26.6 mg of ascorbic acid; 1.9 mg of beta-carotene; and 14.8 mg of glutathione. We confirm that purslane is a nutritious food rich in omega-3 fatty acids and antioxidants.",
"title": ""
},
{
"docid": "ede12c734b2fb65b427b3d47e1f3c3d8",
"text": "Battery management systems in hybrid-electric-vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state-of-charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose methods, based on extended Kalman filtering (EKF), that are able to accomplish these goals for a lithium ion polymer battery pack. We expect that they will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. This third paper concludes the series by presenting five additional applications where either an EKF or results from EKF may be used in typical BMS algorithms: initializing state estimates after the vehicle has been idle for some time; estimating state-of-charge with dynamic error bounds on the estimate; estimating pack available dis/charge power; tracking changing pack parameters (including power fade and capacity fade) as the pack ages, and therefore providing a quantitative estimate of state-of-health; and determining which cells must be equalized. Results from pack tests are presented. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e4e26cc61b326f8d60dc3f32909d340c",
"text": "We propose two secure protocols namely private equality test (PET) for single comparison and private batch equality test (PriBET) for batch comparisons of l-bit integers. We ensure the security of these secure protocols using somewhat homomorphic encryption (SwHE) based on ring learning with errors (ring-LWE) problem in the semi-honest model. In the PET protocol, we take two private integers input and produce the output denoting their equality or non-equality. Here the PriBET protocol is an extension of the PET protocol. So in the PriBET protocol, we take single private integer and another set of private integers as inputs and produce the output denoting whether single integer equals at least one integer in the set of integers or not. To serve this purpose, we also propose a new packing method for doing the batch equality test using few homomorphic multiplications of depth one. Here we have done our experiments at the 140-bit security level. For the lattice dimension 2048, our experiments show that the PET protocol is capable of doing any equality test of 8-bit to 2048-bit that require at most 107 milliseconds. Moreover, the PriBET protocol is capable of doing about 600 (resp., 300) equality comparisons per second for 32-bit (resp., 64-bit) integers. In addition, our experiments also show that the PriBET protocol can do more computations within the same time if the data size is smaller like 8-bit or 16-bit.",
"title": ""
},
{
"docid": "1cc4048067cc93c2f1e836c77c2e06dc",
"text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.",
"title": ""
},
{
"docid": "440436a887f73c599452dc57c689dc9d",
"text": "This paper will explore the process of desalination by reverse osmosis (RO) and the benefits that it can contribute to society. RO may offer a sustainable solution to the water crisis, a global problem that is not going away without severe interference and innovation. This paper will go into depth on the processes involved with RO and how contaminants are removed from sea-water. Additionally, the use of significant pressures to force water through the semipermeable membranes, which only allow water to pass through them, will be investigated. Throughout the paper, the topics of environmental and economic sustainability will be covered. Subsequently, the two primary methods of desalination, RO and multi-stage flash distillation (MSF), will be compared. It will become clear that RO is a better method of desalination when compared to MSF. This paper will study examples of RO in action, including; the Carlsbad Plant, the Sorek Plant, and applications beyond the potable water industry. It will be shown that The Claude \"Bud\" Lewis Carlsbad Desalination Plant (Carlsbad), located in San Diego, California is a vital resource in the water economy of the area. The impact of the Sorek Plant, located in Tel Aviv, Israel will also be explained. Both plants produce millions of gallons of fresh, drinkable water and are vital resources for the people that live there.",
"title": ""
},
{
"docid": "10496d5427035670d89f72a64b68047f",
"text": "A challenge for human-computer interaction researchers and user interf ace designers is to construct information technologies that support creativity. This ambitious goal can be attained by building on an adequate understanding of creative processes. This article offers a four-phase framework for creativity that might assist designers in providing effective tools for users: (1)Collect: learn from provious works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages, (3)Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. Within this integrated framework, this article proposes eight activities that require human-computer interaction research and advanced user interface design. A scenario about an architect illustrates the process of creative work within such an environment.",
"title": ""
},
{
"docid": "c19b63a2c109c098c22877bcba8690ae",
"text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.",
"title": ""
},
{
"docid": "cc5f814338606b92c92aa6caf2f4a3f5",
"text": "The purpose of this study was to report the outcome of infants with antenatal hydronephrosis. Between May 1999 and June 2006, all patients diagnosed with isolated fetal renal pelvic dilatation (RPD) were prospectively followed. The events of interest were: presence of uropathy, need for surgical intervention, RPD resolution, urinary tract infection (UTI), and hypertension. RPD was classified as mild (5–9.9 mm), moderate (10–14.9 mm) or severe (≥15 mm). A total of 192 patients was included in the analysis; 114 were assigned to the group of non-significant findings (59.4%) and 78 to the group of significant uropathy (40.6%). Of 89 patients with mild dilatation, 16 (18%) presented uropathy. Median follow-up time was 24 months. Twenty-seven patients (15%) required surgical intervention. During follow-up, UTI occurred in 27 (14%) children. Of 89 patients with mild dilatation, seven (7.8%) presented UTI during follow-up. Renal function, blood pressure, and somatic growth were within normal range at last visit. The majority of patients with mild fetal RPD have no significant findings during infancy. Nevertheless, our prospective study has shown that 18% of these patients presented uropathy and 7.8% had UTI during a medium-term follow-up time. Our findings suggested that, in contrast to patients with moderate/severe RPD, infants with mild RPD do not require invasive diagnostic procedures but need strict clinical surveillance for UTI and progression of RPD.",
"title": ""
},
{
"docid": "2f3bb54596bba8cd7a073ef91964842c",
"text": "BACKGROUND AND PURPOSE\nRecent meta-analyses have suggested similar wound infection rates when using single- or multiple-dose antibiotic prophylaxis in the operative management of closed long bone fractures. In order to assist clinicians in choosing the optimal prophylaxis strategy, we performed a cost-effectiveness analysis comparing single- and multiple-dose prophylaxis.\n\n\nMETHODS\nA cost-effectiveness analysis comparing the two prophylactic strategies was performed using time horizons of 60 days and 1 year. Infection probabilities, costs, and quality-adjusted life days (QALD) for each strategy were estimated from the literature. All costs were reported in 2007 US dollars. A base case analysis was performed for the surgical treatment of a closed ankle fracture. Sensitivity analysis was performed for all variables, including probabilistic sensitivity analysis using Monte Carlo simulation.\n\n\nRESULTS\nSingle-dose prophylaxis results in lower cost and a similar amount of quality-adjusted life days gained. The single-dose strategy had an average cost of $2,576 for an average gain of 272 QALD. Multiple doses had an average cost of $2,596 for 272 QALD gained. These results are sensitive to the incidence of surgical site infection and deep wound infection for the single-dose treatment arm. Probabilistic sensitivity analysis using all model variables also demonstrated preference for the single-dose strategy.\n\n\nINTERPRETATION\nAssuming similar infection rates between the prophylactic groups, our results suggest that single-dose prophylaxis is slightly more cost-effective than multiple-dose regimens for the treatment of closed fractures. Extensive sensitivity analysis demonstrates these results to be stable using published meta-analysis infection rates.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "bf4a991dbb32ec1091a535750637dbd7",
"text": "As cutting-edge experiments display ever more extreme forms of non-classical behavior, the prevailing view on the interpretation of quantum mechanics appears to be gradually changing. A (highly unscientific) poll taken at the 1997 UMBC quantum mechanics workshop gave the once alldominant Copenhagen interpretation less than half of the votes. The Many Worlds interpretation (MWI) scored second, comfortably ahead of the Consistent Histories and Bohm interpretations. It is argued that since all the above-mentioned approaches to nonrelativistic quantum mechanics give identical cookbook prescriptions for how to calculate things in practice, practical-minded experimentalists, who have traditionally adopted the “shut-up-and-calculate interpretation”, typically show little interest in whether cozy classical concepts are in fact real in some untestable metaphysical sense or merely the way we subjectively perceive a mathematically simpler world where the Schrödinger equation describes everything — and that they are therefore becoming less bothered by a profusion of worlds than by a profusion of words. Common objections to the MWI are discussed. It is argued that when environment-induced decoherence is taken into account, the experimental predictions of the MWI are identical to those of the Copenhagen interpretation except for an experiment involving a Byzantine form of “quantum suicide”. This makes the choice between them purely a matter of taste, roughly equivalent to whether one believes mathematical language or human language to be more fundamental.",
"title": ""
},
{
"docid": "f274062a188fb717b8645e4d2352072a",
"text": "CPU-FPGA heterogeneous acceleration platforms have shown great potential for continued performance and energy efficiency improvement for modern data centers, and have captured great attention from both academia and industry. However, it is nontrivial for users to choose the right platform among various PCIe and QPI based CPU-FPGA platforms from different vendors. This paper aims to find out what microarchitectural characteristics affect the performance, and how. We conduct our quantitative comparison and in-depth analysis on two representative platforms: QPI-based Intel-Altera HARP with coherent shared memory, and PCIe-based Alpha Data board with private device memory. We provide multiple insights for both application developers and platform designers.",
"title": ""
},
{
"docid": "c9c29c091c9851920315c4d4b38b4c9f",
"text": "BACKGROUND\nThe presence of six or more café au lait (CAL) spots is a criterion for the diagnosis of neurofibromatosis type 1 (NF-1). Children with multiple CAL spots are often referred to dermatologists for NF-1 screening. The objective of this case series is to characterize a subset of fair-complected children with red or blond hair and multiple feathery CAL spots who did not meet the criteria for NF-1 at the time of their last evaluation.\n\n\nMETHODS\nWe conducted a chart review of eight patients seen in our pediatric dermatology clinic who were previously identified as having multiple CAL spots and no other signs or symptoms of NF-1.\n\n\nRESULTS\nWe describe eight patients ages 2 to 9 years old with multiple, irregular CAL spots with feathery borders and no other signs or symptoms of NF-1. Most of these patients had red or blond hair and were fair complected. All patients were evaluated in our pediatric dermatology clinic, some with a geneticist. The number of CAL spots per patient ranged from 5 to 15 (mean 9.4, median 9).\n\n\nCONCLUSION\nA subset of children, many with fair complexions and red or blond hair, has an increased number of feathery CAL spots and appears unlikely to develop NF-1, although genetic testing was not conducted. It is important to recognize the benign nature of CAL spots in these patients so that appropriate screening and follow-up recommendations may be made.",
"title": ""
},
{
"docid": "fc07af4d49f7b359e484381a0a88aff7",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1ec62f70be9d006b7e1295ef8d9cb1e3",
"text": "The aim of this research is to explore social media and its benefits especially from business-to-business innovation and related customer interface perspective, and to create a more comprehensive picture of the possibilities of social media for the business-to-business sector. Business-to-business context was chosen because it is in many ways a very different environment for social media than business-to-consumer context, and is currently very little academically studied. A systematic literature review on B2B use of social media and achieved benefits in the inn ovation con text was performed to answer the questions above and achieve the research goals. The study clearly demonstrates that not merely B2C's, as commonly believed, but also B2B's can benefit from the use of social media in a variety of ways. Concerning the broader classes of innovation --related benefits, the reported benefits of social media use referred to increased customer focus and understanding, increased level of customer service, and decreased time-to-market. The study contributes to the existing social media --related literature, because there were no found earlier comprehensive academic studies on the use of social media in the innovation process in the context of B2B customer interface.",
"title": ""
},
{
"docid": "97c162261666f145da6e81d2aa9a8343",
"text": "Shape optimization is a growing field of interest in many areas of academic research, marine design, and manufacturing. As part of the CREATE Ships Hydromechanics Product, an effort is underway to develop a computational tool set and process framework that can aid the ship designer in making informed decisions regarding the influence of the planned hull shape on its hydrodynamic characteristics, even at the earliest stages where decisions can have significant cost implications. The major goal of this effort is to utilize the increasing experience gained in using these methods to assess shape optimization techniques and how they might impact design for current and future naval ships. Additionally, this effort is aimed at establishing an optimization framework within the bounds of a collaborative design environment that will result in improved performance and better understanding of preliminary ship designs at an early stage. The initial effort demonstrated here is aimed at ship resistance, and examples are shown for full ship and localized bow dome shaping related to the Joint High Speed Sealift (JHSS) hull concept. Introduction Any ship design inherently involves optimization, as competing requirements and design parameters force the design to evolve, and as designers strive to deliver the most effective and efficient platform possible within the constraints of time, budget, and performance requirements. A significant number of applications of computational fluid dynamics (CFD) tools to hydrodynamic optimization, mostly for reducing calm-water drag and wave patterns, demonstrate a growing interest in optimization. In addition, more recent ship design programs within the US Navy illustrate some fundamental changes in mission and performance requirements, and future ship designs may be radically different from current ships in the fleet. One difficulty with designing such new concepts is the lack of experience from which to draw from when performing design studies; thus, optimization techniques may be particularly useful. These issues point to a need for greater fidelity, robustness, and ease of use in the tools used in early stage ship design. The Computational Research and Engineering Acquisition Tools and Environments (CREATE) program attempts to address this in its plan to develop and deploy sets of computational engineering design and analysis tools. It is expected that advances in computers will allow for highly accurate design and analyses studies that can be carried out throughout the design process. In order to evaluate candidate designs and explore the design space more thoroughly shape optimization is an important component of the CREATE Ships Hydromechanics Product. The current program development plan includes fast parameterized codes to bound the design space and more accurate Reynolds-Averaged Navier-Stokes (RANS) codes to better define the geometry and performance of the specified hull forms. The potential for hydrodynamic shape optimization has been demonstrated for a variety of different hull forms, including multi-hulls, in related efforts (see e.g., Wilson et al, 2009, Stern et al, Report Documentation Page Form Approved",
"title": ""
},
{
"docid": "7a8a98b91680cbc63594cd898c3052c8",
"text": "Policy-based access control is a technology that achieves separation of concerns through evaluating an externalized policy at each access attempt. While this approach has been well-established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, search operations for such applications involve poor scalability with regard to the data set size for this approach, because they are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than current state of practice approach that supports policy-based access control.",
"title": ""
}
] |
scidocsrr
|
01b5af49bd41891b0e9c7c78fbcc468b
|
Collaborative Networks of Cognitive Systems
|
[
{
"docid": "8bc04818536d2a8deff01b0ea0419036",
"text": "Research in IT must address the design tasks faced by practitioners. Real problems must be properly conceptualized and represented, appropriate techniques for their solution must be constructed, and solutions must be implemented and evaluated using appropriate criteria. If significant progress is to be made, IT research must also develop an understanding of how and why IT systems work or do not work. Such an understanding must tie together natural laws governing IT systems with natural laws governing the environments in which they operate. This paper presents a two dimensional framework for research in information technology. The first dimension is based on broad types of design and natural science research activities: build, evaluate, theorize, and justify. The second dimension is based on broad types of outputs produced by design research: representational constructs, models, methods, and instantiations. We argue that both design science and natural science activities are needed to insure that IT research is both relevant and effective.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] |
[
{
"docid": "b2a7c0a96f29a554ecdba2d56778b7c7",
"text": "Existing video streaming algorithms use various estimation approaches to infer the inherently variable bandwidth in cellular networks, which often leads to reduced quality of experience (QoE). We ask the question: \"If accurate bandwidth prediction were possible in a cellular network, how much can we improve video QoE?\". Assuming we know the bandwidth for the entire video session, we show that existing streaming algorithms only achieve between 69%-86% of optimal quality. Since such knowledge may be impractical, we study algorithms that know the available bandwidth for a few seconds into the future. We observe that prediction alone is not sufficient and can in fact lead to degraded QoE. However, when combined with rate stabilization functions, prediction outperforms existing algorithms and reduces the gap with optimal to 4%. Our results lead us to believe that cellular operators and content providers can tremendously improve video QoE by predicting available bandwidth and sharing it through APIs.",
"title": ""
},
{
"docid": "b922460e2a1d8b6dff6cc1c8c8c459ed",
"text": "This paper presents a new dynamic latched comparator which shows lower input-referred latch offset voltage and higher load drivability than the conventional dynamic latched comparators. With two additional inverters inserted between the input- and output-stage of the conventional double-tail dynamic comparator, the gain preceding the regenerative latch stage was improved and the complementary version of the output-latch stage, which has bigger output drive current capability at the same area, was implemented. As a result, the circuit shows up to 25% less input-referred latch offset voltage and 44% less sensitivity of the delay versus the input voltage difference (delay/log(ΔVin)), which is about 17.2ps/decade, than the conventional double-tail latched comparator at approximately the same area and power consumption.",
"title": ""
},
{
"docid": "7cef2ade99ffacfe1df5108665870988",
"text": "We describe improvements of the currently most popular method for prediction of classically secreted proteins, SignalP. SignalP consists of two different predictors based on neural network and hidden Markov model algorithms, where both components have been updated. Motivated by the idea that the cleavage site position and the amino acid composition of the signal peptide are correlated, new features have been included as input to the neural network. This addition, combined with a thorough error-correction of a new data set, have improved the performance of the predictor significantly over SignalP version 2. In version 3, correctness of the cleavage site predictions has increased notably for all three organism groups, eukaryotes, Gram-negative and Gram-positive bacteria. The accuracy of cleavage site prediction has increased in the range 6-17% over the previous version, whereas the signal peptide discrimination improvement is mainly due to the elimination of false-positive predictions, as well as the introduction of a new discrimination score for the neural network. The new method has been benchmarked against other available methods. Predictions can be made at the publicly available web server",
"title": ""
},
{
"docid": "6a19410817766b052a2054b2cb3efe42",
"text": "Artificially intelligent agents equipped with strategic skills that can negotiate during their interactions with other natural or artificial agents are still underdeveloped. This paper describes a successful application of Deep Reinforcement Learning (DRL) for training intelligent agents with strategic conversational skills, in a situated dialogue setting. Previous studies have modelled the behaviour of strategic agents using supervised learning and traditional reinforcement learning techniques, the latter using tabular representations or learning with linear function approximation. In this study, we apply DRL with a high-dimensional state space to the strategic board game of Settlers of Catan—where players can offer resources in exchange for others and they can also reply to offers made by other players. Our experimental results report that the DRL-based learnt policies significantly outperformed several baselines including random, rule-based, and supervised-based behaviours. The DRL-based policy has a 53% win rate versus 3 automated players (‘bots’), whereas a supervised player trained on a dialogue corpus in this setting achieved only 27%, versus the same 3 bots. This result supports the claim that DRL is a promising framework for training dialogue systems, and strategic agents with negotiation abilities.",
"title": ""
},
{
"docid": "d14812771115b4736c6d46aecadb2d8a",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "90aceb010cead2fbdc37781c686bf522",
"text": "The present article examines the relationship between age and dominance in bilingual populations. Age in bilingualism is understood as the point in devel10 opment at which second language (L2) acquisition begins and as the chronological age of users of two languages. Age of acquisition (AoA) is a factor in determining which of a bilingual’s two languages is dominant and to what degree, and it, along with age of first language (L1) attrition, may be associated with shifts in dominance from the L1 to the L2. In turn, dominance and chron15 ological age, independently and in interaction with lexical frequency, predict performance on naming tasks. The article also considers the relevance of criticalperiod accounts of the relationships of AoA and age of L1 attrition to L2 dominance, and of usage-based and cognitive-aging accounts of the roles of age and dominance in naming.",
"title": ""
},
{
"docid": "0f85ce6afd09646ee1b5242a4d6122d1",
"text": "Environmental concern has resulted in a renewed interest in bio-based materials. Among them, plant fibers are perceived as an environmentally friendly substitute to glass fibers for the reinforcement of composites, particularly in automotive engineering. Due to their wide availability, low cost, low density, high-specific mechanical properties, and eco-friendly image, they are increasingly being employed as reinforcements in polymer matrix composites. Indeed, their complex microstructure as a composite material makes plant fiber a really interesting and challenging subject to study. Research subjects about such fibers are abundant because there are always some issues to prevent their use at large scale (poor adhesion, variability, low thermal resistance, hydrophilic behavior). The choice of natural fibers rather than glass fibers as filler yields a change of the final properties of the composite. One of the most relevant differences between the two kinds of fiber is their response to humidity. Actually, glass fibers are considered as hydrophobic whereas plant fibers have a pronounced hydrophilic behavior. Composite materials are often submitted to variable climatic conditions during their lifetime, including unsteady hygroscopic conditions. However, in humid conditions, strong hydrophilic behavior of such reinforcing fibers leads to high level of moisture absorption in wet environments. This results in the structural modification of the fibers and an evolution of their mechanical properties together with the composites in which they are fitted in. Thereby, the understanding of these moisture absorption mechanisms as well as the influence of water on the final properties of these fibers and their composites is of great interest to get a better control of such new biomaterials. This is the topic of this review paper.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "94105f6e64a27b18f911d788145385b6",
"text": "Low socioeconomic status (SES) is generally associated with high psychiatric morbidity, more disability, and poorer access to health care. Among psychiatric disorders, depression exhibits a more controversial association with SES. The authors carried out a meta-analysis to evaluate the magnitude, shape, and modifiers of such an association. The search found 51 prevalence studies, five incidence studies, and four persistence studies meeting the criteria. A random effects model was applied to the odds ratio of the lowest SES group compared with the highest, and meta-regression was used to assess the dose-response relation and the influence of covariates. Results indicated that low-SES individuals had higher odds of being depressed (odds ratio = 1.81, p < 0.001), but the odds of a new episode (odds ratio = 1.24, p = 0.004) were lower than the odds of persisting depression (odds ratio = 2.06, p < 0.001). A dose-response relation was observed for education and income. Socioeconomic inequality in depression is heterogeneous and varies according to the way psychiatric disorder is measured, to the definition and measurement of SES, and to contextual features such as region and time. Nonetheless, the authors found compelling evidence for socioeconomic inequality in depression. Strategies for tackling inequality in depression are needed, especially in relation to the course of the disorder.",
"title": ""
},
{
"docid": "d8752c40782d8189d454682d1d30738e",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "6df45b11d623e8080cc7163632dde893",
"text": "Network bandwidth and hardware technology are developing rapidly, resulting in the vigorous development of the Internet. A new concept, cloud computing, uses low-power hosts to achieve high reliability. The cloud computing, an Internet-based development in which dynamicallyscalable and often virtualized resources are provided as a service over the Internet has become a significant issues. In this paper, we aim to pinpoint the challenges and issues of Cloud computing. We first discuss two related computing paradigms Service-Oriented Computing and Grid computing, and their relationships with Cloud computing. We then identify several challenges from the Cloud computing adoption perspective. Last, we will highlight the Cloud interoperability issue that deserves substantial further research and development. __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "ad967dca901ccdd3f33b83da29e9f18b",
"text": "Energy consumption limits battery life in mobile devices and increases costs for servers and data centers. Approximate computing addresses energy concerns by allowing applications to trade accuracy for decreased energy consumption. Approximation frameworks can guarantee accuracy or performance and generally reduce energy usage; however, they provide no energy guarantees. Such guarantees would be beneficial for users who have a fixed energy budget and want to maximize application accuracy within that budget. We address this need by presenting JouleGuard: a runtime control system that coordinates approximate applications with system resource usage to provide control theoretic formal guarantees of energy consumption, while maximizing accuracy. We implement JouleGuard and test it on three different platforms (a mobile, tablet, and server) with eight different approximate applications created from two different frameworks. We find that JouleGuard respects energy budgets, provides near optimal accuracy, adapts to phases in application workload, and provides better outcomes than application approximation or system resource adaptation alone. JouleGuard is general with respect to the applications and systems it controls, making it a suitable runtime for a number of approximate computing frameworks.",
"title": ""
},
{
"docid": "7da0a472f0a682618eccbfd4229ca14f",
"text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.",
"title": ""
},
{
"docid": "442680dcfbe4651eb5434e6b6703d25e",
"text": "The mammalian genome is transcribed into large numbers of long noncoding RNAs (lncRNAs), but the definition of functional lncRNA groups has proven difficult, partly due to their low sequence conservation and lack of identified shared properties. Here we consider promoter conservation and positional conservation as indicators of functional commonality. We identify 665 conserved lncRNA promoters in mouse and human that are preserved in genomic position relative to orthologous coding genes. These positionally conserved lncRNA genes are primarily associated with developmental transcription factor loci with which they are coexpressed in a tissue-specific manner. Over half of positionally conserved RNAs in this set are linked to chromatin organization structures, overlapping binding sites for the CTCF chromatin organiser and located at chromatin loop anchor points and borders of topologically associating domains (TADs). We define these RNAs as topological anchor point RNAs (tapRNAs). Characterization of these noncoding RNAs and their associated coding genes shows that they are functionally connected: they regulate each other’s expression and influence the metastatic phenotype of cancer cells in vitro in a similar fashion. Furthermore, we find that tapRNAs contain conserved sequence domains that are enriched in motifs for zinc finger domain-containing RNA-binding proteins and transcription factors, whose binding sites are found mutated in cancers. This work leverages positional conservation to identify lncRNAs with potential importance in genome organization, development and disease. The evidence that many developmental transcription factors are physically and functionally connected to lncRNAs represents an exciting stepping-stone to further our understanding of genome regulation.",
"title": ""
},
{
"docid": "7a3441773c79b9fde64ebcf8103616a1",
"text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).",
"title": ""
},
{
"docid": "892bad91cfae82dfe3d06d2f93edfe8b",
"text": "Fine-grained image recognition is a challenging computer vision problem, due to the small inter-class variations caused by highly similar subordinate categories, and the large intra-class variations in poses, scales and rotations. In this paper, we prove that selecting useful deep descriptors contributes well to fine-grained image recognition. Specifically, a novel Mask-CNN model without the fully connected layers is proposed. Based on the part annotations, the proposed model consists of a fully convolutional network to both locate the discriminative parts ( e.g. , head and torso), and more importantly generate weighted object/part masks for selecting useful and meaningful convolutional descriptors. After that, a three-stream Mask-CNN model is built for aggregating the selected objectand part-level descriptors simultaneously. Thanks to discarding the parameter redundant fully connected layers, our Mask-CNN has a small feature dimensionality and efficient inference speed by comparing with other fine-grained approaches. Furthermore, we obtain a new state-of-the-art accuracy on two challenging fine-grained bird species categorization datasets, which validates the effectiveness of both the descriptor selection scheme and the proposed",
"title": ""
},
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "486417082d921eba9320172a349ee28f",
"text": "Circulating tumor cells (CTCs) are a popular topic in cancer research because they can be obtained by liquid biopsy, a minimally invasive procedure with more sample accessibility than tissue biopsy, to monitor a patient's condition. Over the past decades, CTC research has covered a wide variety of topics such as enumeration, profiling, and correlation between CTC number and patient overall survival. It is important to isolate and enrich CTCs before performing CTC analysis because CTCs in the blood stream are very rare (0⁻10 CTCs/mL of blood). Among the various approaches to separating CTCs, here, we review the research trends in the isolation and analysis of CTCs using microfluidics. Microfluidics provides many attractive advantages for CTC studies such as continuous sample processing to reduce target cell loss and easy integration of various functions into a chip, making \"do-everything-on-a-chip\" possible. However, tumor cells obtained from different sites within a tumor exhibit heterogenetic features. Thus, heterogeneous CTC profiling should be conducted at a single-cell level after isolation to guide the optimal therapeutic path. We describe the studies on single-CTC analysis based on microfluidic devices. Additionally, as a critical concern in CTC studies, we explain the use of CTCs in cancer research, despite their rarity and heterogeneity, compared with other currently emerging circulating biomarkers, including exosomes and cell-free DNA (cfDNA). Finally, the commercialization of products for CTC separation and analysis is discussed.",
"title": ""
},
{
"docid": "106df67fa368439db4f5684b4a9f7bd9",
"text": "Issues in cybersecurity; understanding the potential risks associated with hackers/crackers Alan D. Smith William T. Rupp Article information: To cite this document: Alan D. Smith William T. Rupp, (2002),\"Issues in cybersecurity; understanding the potential risks associated with hackers/ crackers\", Information Management & Computer Security, Vol. 10 Iss 4 pp. 178 183 Permanent link to this document: http://dx.doi.org/10.1108/09685220210436976",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] |
scidocsrr
|
788d40c0b87990e754b1d4a9c98f72ff
|
HoME: a Household Multimodal Environment
|
[
{
"docid": "46c8336f395d04d49369d406f41b0602",
"text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.",
"title": ""
},
{
"docid": "8e6debae3b3d3394e87e671a14f8819e",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
}
] |
[
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e235a9eb5df7c5cf1487ae03cc6bc4d3",
"text": "The objective of the proposed scheme is to extract the maximum power at different wind turbine speed. In order to achieve this, MPPT controller is implemented on the rectifier side for the extraction of maximum power. On the inverter side normal closed loop PWM control is carried out. MPPT controller is implemented using fuzzy logic control technique. The fuzzy controller's role here is to track the speed reference of the generator. By doing so and keeping the generator speed at an optimal reference value, maximum power can be attained. This procedure is repeated for various wind turbine speeds, When the wind speed increases the real power generated by the PMSG based WECS increases with the aid of MPPT controller.",
"title": ""
},
{
"docid": "a49ea9c9f03aa2d926faa49f4df63b7a",
"text": "Deep stacked RNNs are usually hard to train. Recent studies have shown that shortcut connections across different RNN layers bring substantially faster convergence. However, shortcuts increase the computational complexity of the recurrent computations. To reduce the complexity, we propose the shortcut block, which is a refinement of the shortcut LSTM blocks. Our approach is to replace the self-connected parts (ct) with shortcuts (hl−2 t ) in the internal states. We present extensive empirical experiments showing that this design performs better than the original shortcuts. We evaluate our method on CCG supertagging task, obtaining a 8% relatively improvement over current state-of-the-art results.",
"title": ""
},
{
"docid": "3fb6cec95fcaa0f8b6c6e4f649591b35",
"text": "This paper presents the performance of DSP, image and 3D applications on recent general-purpose microprocessors using streaming SIMD ISA extensions (integer and oating point). The 9 benchmarks benchmark we use for this evaluation have been optimized for DLP and caches use with SIMD extensions and data prefetch. The result of these cumulated optimizations is a speedup that ranges from 1.9 to 7.1. All the benchmarks were originaly computation bound and 7 becomes memory bandwidth bound with the addition of SIMD and data prefetch. Quadrupling the memory bandwidth has no eeect on original kernels but improves the performance of SIMD kernels by 15-55%.",
"title": ""
},
{
"docid": "842202ed67b71c91630fcb63c4445e38",
"text": "Yaumatei Dermatology Clinic, 12/F Yaumatei Specialist Clinic (New Extension), 143 Battery Street, Yaumatei, Kowloon, Hong Kong A 46-year-old Chinese man presented with one year history of itchy verrucous lesions over penis and scrotum. Skin biopsy confirmed epidermolytic acanthoma. Epidermolytic acanthoma is a rare benign tumour. Before making such a diagnosis, exclusion of other diseases, especially genital warts and bowenoid papulosis is necessary. Treatment of multiple epidermolytic acanthoma remains unsatisfactory.",
"title": ""
},
{
"docid": "b15815b79af412b59b1780538f7dc4ce",
"text": "Aim—To recognise automatically the main components of the fundus on digital colour images. Methods—The main features of a fundus retinal image were defined as the optic disc, fovea, and blood vessels. Methods are described for their automatic recognition and location. 112 retinal images were preprocessed via adaptive, local, contrast enhancement. The optic discs were located by identifying the area with the highest variation in intensity of adjacent pixels. Blood vessels were identified by means of a multilayer perceptron neural net, for which the inputs were derived from a principal component analysis (PCA) of the image and edge detection of the first component of PCA. The foveas were identified using matching correlation together with characteristics typical of a fovea—for example, darkest area in the neighbourhood of the optic disc. The main components of the image were identified by an experienced ophthalmologist for comparison with computerised methods. Results—The sensitivity and specificity of the recognition of each retinal main component was as follows: 99.1% and 99.1% for the optic disc; 83.3% and 91.0% for blood vessels; 80.4% and 99.1% for the fovea. Conclusions—In this study the optic disc, blood vessels, and fovea were accurately detected. The identification of the normal components of the retinal image will aid the future detection of diseases in these regions. In diabetic retinopathy, for example, an image could be analysed for retinopathy with reference to sight threatening complications such as disc neovascularisation, vascular changes, or foveal exudation. (Br J Ophthalmol 1999;83:902–910) The patterns of disease that aVect the fundus of the eye are varied and usually require identification by a trained human observer such as a clinical ophthalmologist. The employment of digital fundus imaging in ophthalmology provides us with digitised data that could be exploited for computerised detection of disease. Indeed, many investigators use computerised image analysis of the eye, under the direction of a human observer. The management of certain diseases would be greatly facilitated if a fully automated method was employed. An obvious example is the care of diabetic retinopathy, which requires the screening of large numbers of patients (approximately 30 000 individuals per million total population ). Screening of diabetic retinopathy may reduce blindness in these patients by 50% and can provide considerable cost savings to public health systems. 9 Most methods, however, require identification of retinopathy by expensive, specifically trained personnel. A wholly automated approach involving fundus image analysis by computer could provide an immediate classification of retinopathy without the need for specialist opinions. Manual semiquantitative methods of image processing have been employed to provide faster and more accurate observation of the degree of macula oedema in fluorescein images. Progress has been made towards the development of a fully automated system to detect microaneurysms in digitised fluorescein angiograms. 16 Fluorescein angiogram images are good for observing some pathologies such as microaneurysms which are indicators of diabetic retinopathy. It is not an ideal method for an automatic screening system since it requires an injection of fluorescein into the body. This disadvantage makes the use of colour fundus images, which do not require an injection of fluorescein, more suitable for automatic",
"title": ""
},
{
"docid": "5506207c5d11a464b1bca39d6092089e",
"text": "Scalp recorded event-related potentials were used to investigate the neural activity elicited by emotionally negative and emotionally neutral words during the performance of a recognition memory task. Behaviourally, the principal difference between the two word classes was that the false alarm rate for negative items was approximately double that for the neutral words. Correct recognition of neutral words was associated with three topographically distinct ERP memory 'old/new' effects: an early, bilateral, frontal effect which is hypothesised to reflect familiarity-driven recognition memory; a subsequent left parietally distributed effect thought to reflect recollection of the prior study episode; and a late onsetting, right-frontally distributed effect held to be a reflection of post-retrieval monitoring. The old/new effects elicited by negative words were qualitatively indistinguishable from those elicited by neutral items and, in the case of the early frontal effect, of equivalent magnitude also. However, the left parietal effect for negative words was smaller in magnitude and shorter in duration than that elicited by neutral words, whereas the right frontal effect was not evident in the ERPs to negative items. These differences between neutral and negative words in the magnitude of the left parietal and right frontal effects were largely attributable to the increased positivity of the ERPs elicited by new negative items relative to the new neutral items. Together, the behavioural and ERP findings add weight to the view that emotionally valenced words influence recognition memory primarily by virtue of their high levels of 'semantic cohesion', which leads to a tendency for 'false recollection' of unstudied items.",
"title": ""
},
{
"docid": "28cba5bf535dabdfadfd1f634a574d52",
"text": "There are several complex business processes in the higher education. As the number of university students has been tripled in Hungary the automation of these task become necessary. The Near Field Communication (NFC) technology provides a good opportunity to support the automated execution of several education related processes. Recently a new challenge is identified at the Budapest University of Technology and Economics. As most of the lecture notes had become available in electronic format the students especially the inexperienced freshman ones did not attend to the lectures significantly decreasing the rate of successful exams. This drove to the decision to elaborate an accurate and reliable information system for monitoring the student's attendance at the lectures. Thus we have developed a novel, NFC technology based business use case of student attendance monitoring. In order to meet the requirements of the use case we have implemented a highly autonomous distributed environment assembled by NFC enabled embedded devices, so-called contactless terminals and a scalable backoffice. Beside the opportunity of contactless card based student identification the terminals support biometric identification by fingerprint reading. These features enable the implementation of flexible and secure identification scenarios. The attendance monitoring use case has been tested in a pilot project involving about 30 access terminals and more that 1000 students. In this paper we are introducing the developed attendance monitoring use case, the implemented NFC enabled system, and the experiences gained during the pilot project.",
"title": ""
},
{
"docid": "7f84e215df3d908249bde3be7f2b3cab",
"text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.",
"title": ""
},
{
"docid": "732f651b2ec4570a1229d8427b166c84",
"text": "hundreds of specialized apps? Who could have anticipated the power of our everyday devices to capture our every moment and movement? Cameras, GPS tracking, sensors—a phone is no longer just a phone; it is a powerful personal computing device loaded with access to interactive services that you carry with you everywhere you go. In response to these technological changes, user populations have diversified and grown. Once limited to workplaces and used only by experts, interactive computational devices and applications are now widely available for everyday use, anywhere, anytime by any and all of us. Though complex institutional infrastructures and communications networks still provide the backbone of our digital communications world, HCI research has strongly affected the marketability of these new technologies and networked systems. Human-computer interaction is a discipline concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them. —Thomas T. Hewett et al., 1992",
"title": ""
},
{
"docid": "d6f235abee285021a733b79b6d9c4411",
"text": "We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. In particular, we model risk-sensitivity in a reinforcement learning framework by making use of models of human decision-making having their origins in behavioral psychology, behavioral economics, and neuroscience. We propose a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. We demonstrate the performance of the proposed technique on two examples, the first of which is the canonical Grid World example and the second of which is a Markov decision process modeling passengers decisions regarding ride-sharing. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the Markov decision process.",
"title": ""
},
{
"docid": "db55d7b7e0185d872b27c89c3892a289",
"text": "Bitcoin relies on the Unspent Transaction Outputs (UTXO) set to efficiently verify new generated transactions. Every unspent output, no matter its type, age, value or length is stored in every full node. In this paper we introduce a tool to study and analyze the UTXO set, along with a detailed description of the set format and functionality. Our analysis includes a general view of the set and quantifies the difference between the two existing formats up to the date. We also provide an accurate analysis of the volume of dust and unprofitable outputs included in the set, the distribution of the block height in which the outputs where included, and the use of non-standard outputs.",
"title": ""
},
{
"docid": "94a2b34eaa02ffeffdde5aa74e7836d2",
"text": "Drought is a stochastic natural hazard that is instigated by intense and persistent shortage of precipitation. Following an initial meteorological phenomenon, subsequent impacts are realized on agriculture and hydrology. Among the natural hazards, droughts possess certain unique features; in addition to delayed effects, droughts vary by multiple dynamic dimensions including severity and duration, which in addition to causing a pervasive and subjective network of impacts makes them difficult to characterize. In order manage drought, drought characterization is essential enabling both retrospective analyses (e.g., severity versus impacts analysis) and prospective planning (e.g., risk assessment). The adaptation of a simplified method by drought indices has facilitated drought characterization for various users and entities. More than 100 drought indices have so far been proposed, some of which are operationally used to characterize drought using gridded maps at regional and national levels. These indices correspond to different types of drought, including meteorological, agricultural, and hydrological drought. By quantifying severity levels and declaring drought’s start and end, drought indices currently aid in a variety of operations including drought early warning and monitoring and contingency planning. Given their variety and ongoing development, it is crucial to provide a comprehensive overview of available drought indices that highlights their difference and examines the trend in their development. This paper reviews 74 operational and proposed drought indices and describes research directions.",
"title": ""
},
{
"docid": "5dc78e62ca88a6a5f253417093e2aa4d",
"text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).",
"title": ""
},
{
"docid": "fe5aebde601f7f44cfb87e9eea268fef",
"text": "Mining with big data or big data mining has become an active research area. It is very difficult using current methodologies and data mining software tools for a single personal computer to efficiently deal with very large datasets. The parallel and cloud computing platforms are considered a better solution for big data mining. The concept of parallel computing is based on dividing a large problem into smaller ones and each of them is carried out by one single processor individually. In addition, these processes are performed concurrently in a distributed and parallel manner. There are two common methodologies used to tackle the big data problem. The first one is the distributed procedure based on the data parallelism paradigm, where a given big dataset can be manually divided into n subsets, and n algorithms are respectively executed for the corresponding n subsets. The final result can be obtained from a combination of the outputs produced by the n algorithms. The second one is the MapReduce based procedure under the cloud computing platform. This procedure is composed of the map and reduce processes, in which the former performs filtering and * Corresponding author: Chih-Fong Tsai Department of Information Management, National Central University, Taiwan; Tel: +886-3-422-7151 ; Fax: +886-3-4254604 E-mail address: [email protected]",
"title": ""
},
{
"docid": "bc0ca1e4f698fff9277e5bbcf8c8b797",
"text": "This paper presents a hybrid method combining a vector fitting (VF) and a global optimization for diagnosing coupled resonator bandpass filters. The method can extract coupling matrix from the measured or electromagnetically simulated admittance parameters (Y -parameters) of a narrow band coupled resonator bandpass filter with losses. The optimization method is used to remove the phase shift effects of the measured or the EM simulated Y -parameters caused by the loaded transmission lines at the input/output ports of a filter. VF is applied to determine the complex poles and residues of the Y -parameters without phase shift. The coupling matrix can be extracted (also called the filter diagnosis) by these complex poles and residues. The method can be used to computer-aided tuning (CAT) of a filter in the stage of this filter design and/or product process to accelerate its physical design. Three application examples illustrate the validity of the proposed method.",
"title": ""
},
{
"docid": "1cdbeb23bf32c20441a208b3c3a05480",
"text": "Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely on either battery-powered devices which create cost and maintenance issues or cameras which have accuracy and privacy issues. This paper introduces a system that is able to detect the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space as well as implement it in a smart wet lab application. The system is used to track real-time location and motion of the tags in the wet lab as well as recognize pouring actions performed on the objects to which the tag is attached. The median localization accuracy is 7.6cm -- (3.1, 5, 1.9)cm for each (x, y, z) axis -- with max update rates of 15 Sample/s using single RFID reader antenna.",
"title": ""
},
{
"docid": "b455105e5b82f6226198866f324132d1",
"text": "The creation of both a functionally and aesthetically pleasing nasal tip contour is demanding and depends on various different parameters. Typically, procedures are performed with emphasis on narrowing the nasal tip structure. Excisional techniques alone inevitably lead to a reduction in skeletal support and are often prone to unpredictable deformities. But also long-term results of classical suture techniques have shown unfavorable outcomes. Particularly, pinching of the ala and a displacement of the caudal margin of the lateral crus below the cephalic margin belong to this category. A characteristic loss of structural continuity between the domes and the alar lobule and an undesirable shadowing occur. These effects lead to an unnatural appearance of the nasal tip and frequently to impaired nasal breathing. Stability and configuration of the alar cartilages alone do not allow for an adequate evaluation of the nasal tip contour. Rather a three-dimensional approach is required to describe all nasal tip structures. Especially, the rotational angle of the alar surface as well as the longitudinal axis of the lateral crus in relation to cranial septum should be considered in the three-dimensional analysis. Taking the various parameters into account, the authors present new aspects in nasal tip surgery which contribute to the creation of a functionally and aesthetically pleasing as well as durable nasal tip contour.",
"title": ""
},
{
"docid": "9af703a47d382926698958fba88c1e1a",
"text": "Nowadays, the use of agile software development methods like Scrum is common in industry and academia. Considering the current attacking landscape, it is clear that developing secure software should be a main concern in all software development projects. In traditional software projects, security issues require detailed planning in an initial planning phase, typically resulting in a detailed security analysis (e.g., threat and risk analysis), a security architecture, and instructions for security implementation (e.g., specification of key sizes and cryptographic algorithms to use). Agile software development methods like Scrum are known for reducing the initial planning phases (e.g., sprint 0 in Scrum) and for focusing more on producing running code. Scrum is also known for allowing fast adaption of the emerging software to changes of customer wishes. For security, this means that it is likely that there are no detailed security architecture or security implementation instructions from the start of the project. It also means that a lot of design decisions will be made during the runtime of the project. Hence, to address security in Scrum, it is necessary to consider security issues throughout the whole software development process. Secure Scrum is a variation of the Scrum framework with special focus on the development of secure software throughout the whole software development process. It puts emphasis on implementation of security related issues without the need of changing the underlying Scrum process or influencing team dynamics. Secure Scrum allows even non-security experts to spot security issues, to implement security features, and to verify implementations. A field test of Secure Scrum shows that the security level of software developed using Secure Scrum is higher then the security level of software developed using standard Scrum.",
"title": ""
},
{
"docid": "64d9f6973697749b6e2fa330101cbc77",
"text": "Evidence is presented that recognition judgments are based on an assessment of familiarity, as is described by signal detection theory, but that a separate recollection process also contributes to performance. In 3 receiver-operating characteristics (ROC) experiments, the process dissociation procedure was used to examine the contribution of these processes to recognition memory. In Experiments 1 and 2, reducing the length of the study list increased the intercept (d') but decreased the slope of the ROC and increased the probability of recollection but left familiarity relatively unaffected. In Experiment 3, increasing study time increased the intercept but left the slope of the ROC unaffected and increased both recollection and familiarity. In all 3 experiments, judgments based on familiarity produced a symmetrical ROC (slope = 1), but recollection introduced a skew such that the slope of the ROC decreased.",
"title": ""
}
] |
scidocsrr
|
d211ce71eee620d3c1ec4cf4a098d158
|
Robust end-to-end deep audiovisual speech recognition
|
[
{
"docid": "a3bff96ab2a6379d21abaea00bc54391",
"text": "In view of the advantages of deep networks in producing useful representation, the generated features of different modality data (such as image, audio) can be jointly learned using Multimodal Restricted Boltzmann Machines (MRB-M). Recently, audiovisual speech recognition based the M-RBM has attracted much attention, and the MRBM shows its effectiveness in learning the joint representation across audiovisual modalities. However, the built networks have weakness in modeling the multimodal sequence which is the natural property of speech signal. In this paper, we will introduce a novel temporal multimodal deep learning architecture, named as Recurrent Temporal Multimodal RB-M (RTMRBM), that models multimodal sequences by transforming the sequence of connected MRBMs into a probabilistic series model. Compared with existing multimodal networks, it's simple and efficient in learning temporal joint representation. We evaluate our model on audiovisual speech datasets, two public (AVLetters and AVLetters2) and one self-build. The experimental results demonstrate that our approach can obviously improve the accuracy of recognition compared with standard MRBM and the temporal model based on conditional RBM. In addition, RTMRBM still outperforms non-temporal multimodal deep networks in the presence of the weakness of long-term dependencies.",
"title": ""
}
] |
[
{
"docid": "54c2914107ae5df0a825323211138eb9",
"text": "An implicit, but pervasive view in the information science community is that people are perpetual seekers after new public information, incessantly identifying and consuming new information by browsing the Web and accessing public collections. One aim of this review is to move beyond this consumer characterization, which regards information as a public resource containing novel data that we seek out, consume, and then discard. Instead, I want to focus on a very different view: where familiar information is used as a personal resource that we keep, manage, and (sometimes repeatedly) exploit. I call this information curation. I first summarize limitations of the consumer perspective. I then review research on three different information curation processes: keeping, management, and exploitation. I describe existing work detailing how each of these processes is applied to different types of personal data: documents, e-mail messages, photos, and Web pages. The research indicates people tend to keep too much information, with the exception of contacts and Web pages. When managing information, strategies that rely on piles as opposed to files provide surprising benefits. And in spite of the emergence of desktop search, exploitation currently remains reliant on manual methods such as navigation. Several new technologies have the potential to address important",
"title": ""
},
{
"docid": "180672be0e49be493d9af3ef7b558804",
"text": "Causality is a very intuitive notion that is difficult to make precise without lapsing into tautology. Two ingredients are central to any definition: (1) a set of possible outcomes (counterfactuals) generated by a function of a set of ‘‘factors’’ or ‘‘determinants’’ and (2) a manipulation where one (or more) of the ‘‘factors’’ or ‘‘determinants’’ is changed. An effect is realized as a change in the argument of a stable function that produces the same change in the outcome for a class of interventions that change the ‘‘factors’’ by the same amount. The outcomes are compared at different levels of the factors or generating variables. Holding all factors save one at a constant level, the change in the outcome associated with manipulation of the varied factor is called a causal effect of the manipulated factor. This definition, or some version of it, goes back to Mill (1848) and Marshall (1890). Haavelmo’s (1943) made it more precise within the context of linear equations models. The phrase ‘ceteris paribus’ (everything else held constant) is a mainstay of economic analysis",
"title": ""
},
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "e47ec55000621d81f665f7d01a1a8553",
"text": "Plant pest recognition and detection is vital for f od security, quality of life and a stable agricult ural economy. This research demonstrates the combination of the k -m ans clustering algorithm and the correspondence filter to achieve pest detection and recognition. The detecti on of the dataset is achieved by partitioning the d ata space into Voronoi cells, which tends to find clusters of comparable spatial extents, thereby separating the obj cts (pests) from the background (pest habitat). The det ction is established by extracting the variant dis inctive attributes between the pest and its habitat (leaf, stem) and using the correspondence filter to identi fy the plant pests to obtain correlation peak values for differe nt datasets. This work further establishes that the recognition probability from the pest image is directly proport i nal to the height of the output signal and invers ely proportional to the viewing angles, which further c onfirmed that the recognition of plant pests is a f unction of their position and viewing angle. It is encouraging to note that the correspondence filter can achieve rotational invariance of pests up to angles of 360 degrees, wh ich proves the effectiveness of the algorithm for t he detection and recognition of plant pests.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "52462bd444f44910c18b419475a6c235",
"text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).",
"title": ""
},
{
"docid": "1448b02c9c14e086a438d76afa1b2fde",
"text": "This paper analyzes the classification of hyperspectral remote sensing images with linear discriminant analysis (LDA) in the presence of a small ratio between the number of training samples and the number of spectral features. In these particular ill-posed problems, a reliable LDA requires one to introduce regularization for problem solving. Nonetheless, in such a challenging scenario, the resulting regularized LDA (RLDA) is highly sensitive to the tuning of the regularization parameter. In this context, we introduce in the remote sensing community an efficient version of the RLDA recently presented by Ye to cope with critical ill-posed problems. In addition, several LDA-based classifiers (i.e., penalized LDA, orthogonal LDA, and uncorrelated LDA) are compared theoretically and experimentally with the standard LDA and the RLDA. Method differences are highlighted through toy examples and are exhaustively tested on several ill-posed problems related to the classification of hyperspectral remote sensing images. Experimental results confirm the effectiveness of the presented RLDA technique and point out the main properties of other analyzed LDA techniques in critical ill-posed hyperspectral image classification problems.",
"title": ""
},
{
"docid": "7edaef142ecf8a3825affc09ad10d73a",
"text": "Internet of Things (IoT) is a network of sensors, actuators, mobile and wearable devices, simply things that have processing and communication modules and can connect to the Internet. In a few years time, billions of such things will start serving in many fields within the concept of IoT. Self-configuration, autonomous device addition, Internet connection and resource limitation features of IoT causes it to be highly prone to the attacks. Denial of Service (DoS) attacks which have been targeting the communication networks for years, will be the most dangerous threats to IoT networks. This study aims to analyze and classify the DoS attacks that may target the IoT environments. In addition to this, the systems that try to detect and mitigate the DoS attacks to IoT will be evaluated.",
"title": ""
},
{
"docid": "65aa93b6ca41fe4ca54a4a7dee508db2",
"text": "The field of deep learning has seen significant advancement in recent years. However, much of the existing work has been focused on real-valued numbers. Recent work has shown that a deep learning system using the complex numbers can be deeper for a fixed parameter budget compared to its real-valued counterpart. In this work, we explore the benefits of generalizing one step further into the hyper-complex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We develop the theoretical basis by reviewing quaternion convolutions, developing a novel quaternion weight initialization scheme, and developing novel algorithms for quaternion batch-normalization. These pieces are tested in a classification model by end-to-end training on the CIFAR −10 and CIFAR −100 data sets and a segmentation model by end-to-end training on the KITTI Road Segmentation data set. These quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task, while having fewer parameters.",
"title": ""
},
{
"docid": "cb98fd6c850d9b3d9a2bac638b9f632d",
"text": "Artificial immune systems are a collection of algorithms inspired by the human immune system. Over the past 15 years, extensive research has been performed regarding the application of artificial immune systems to computer security. However, existing immune-inspired techniques have not performed as well as expected when applied to the detection of intruders in computer systems. In this thesis the development of the Dendritic Cell Algorithm is described. This is a novel immune-inspired algorithm based on the function of the dendritic cells of the human immune system. In nature, dendritic cells function as natural anomaly detection agents, instructing the immune system to respond if stress or damage is detected. Dendritic cells are a crucial cell in the detection and combination of ‘signals’ which provide the immune system with a sense of context. The Dendritic Cell Algorithm is based on an abstract model of dendritic cell behaviour, with the abstraction process performed in close collaboration with immunologists. This algorithm consists of components based on the key properties of dendritic cell behaviour, which involves data fusion and correlation components. In this algorithm, four categories of input signal are used. The resultant algorithm is formally described in this thesis and is validated on a standard machine learning dataset. The validation process shows that the Dendritic Cell Algorithm can be applied to static datasets and suggests that the algorithm is suitable for the analysis of time-dependent data. Further analysis and evaluation of the Dendritic Cell Algorithm is performed. This is assessed through the algorithm’s application to the detection of anomalous port scans. The results of this investigation show that the Dendritic Cell Algorithm can be applied to detection problems in real-time. This analysis also shows that detection with this algorithm produces high rates of false positives and high rates of true positives, in addition to being robust against modification to system parameters. The limitations of the Dendritic Cell Algorithm are also evaluated and presented, including loss of sensitivity and the generation of false positives under certain circumstances. It is shown that the Dendritic Cell Algorithm can perform well as an anomaly detection algorithm and can be applied to real-world, realtime data.",
"title": ""
},
{
"docid": "fde78187088da4d4b8fe4cb0f959b860",
"text": "The key question raised in this research in progress paper is whether the development stage of a (hardware) startup can give an indication of the crowdfunding type it decides to choose. Throughout the paper, I empirically investigate the German crowdfunding landscape and link it to startups in the hardware sector, picking up the proposed notion of an emergent hardware renaissance. To identify the potential points of contact between crowdfunds and startups, an evaluation of different startup stage models with regard to funding requirements is provided, as is an overview of currently used crowdfunding typologies. The example of two crowdfunding platforms (donation and non-monetary reward crowdfunding vs. equity-based crowdfunding) and their respective hardware projects and startups is used to highlight the potential of this research in progress. 1 Introduction Originally motivated by Paul Graham's 'The Hardware Renaissance' (2012) and further spurred by Witheiler's 'The hardware revolution will be crowdfunded' (2013), I chose to consider the intersection of startups, crowdfunding, and hardware. This is particularly interesting since literature on innovation and startup funding has indeed grown to some sophistication regarding the timing of more classic sources of capital in a startup's life, such as bootstrapping, business angel funding, and venture capital (cf. e.g., Schwienbacher & Larralde, 2012; Metrick & Yasuda, 2011). Due to the novelty of crowdfunding, however, general research on this type of funding is just at the beginning stages and many papers are rather focused on specific elements of the phenomenon (e.g., Belleflamme et al., 2013; Agrawal et al. 2011) and / or exploratory in nature (e.g., Mollick, 2013). What is missing is a verification of the research on potential points of contact between crowdfunds and startups. It remains unclear when crowdfunding is used—primarily during the early seed stage for example or equally at some later point as well—and what types apply (cf. e.g., Collins & Pierrakis, 2012). Simply put, the research question that emerges is whether the development stage of a startup can give an indication of the crowdfunding type it decides to choose. To further explore an answer to this question, I commenced an investigation of the German crowdfunding scene with a focus on hardware startups. Following desk research on platforms situated in German-speaking areas—Germany, Austria, Switzerland—, a categorization of the respectively used funding types is still in process, and transitions into a quantitative analysis and an in-depth case study-based assessment. The prime challenge of such an investigation …",
"title": ""
},
{
"docid": "d8da6bebb1ca8f00b176e1493ded4b9c",
"text": "This paper presents an efficient technique for the evaluation of different types of losses in substrate integrated waveguide (SIW). This technique is based on the Boundary Integral-Resonant Mode Expansion (BI-RME) method in conjunction with a perturbation approach. This method also permits to derive automatically multimodal and parametric equivalent circuit models of SIW discontinuities, which can be adopted for an efficient design of complex SIW circuits. Moreover, a comparison of losses in different types of planar interconnects (SIW, microstrip, coplanar waveguide) is presented.",
"title": ""
},
{
"docid": "7cd8dee294d751ec6c703d628e0db988",
"text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.",
"title": ""
},
{
"docid": "18247ea0349da81fe2cf93b3663b081f",
"text": "Nowadays, more and more companies migrate business from their own servers to the cloud. With the influx of computational requests, datacenters consume tremendous energy every day, attracting great attention in the energy efficiency dilemma. In this paper, we investigate the energy-aware resource management problem in cloud datacenters, where green energy with unpredictable capacity is connected. Via proposing a robust blockchain-based decentralized resource management framework, we save the energy consumed by the request scheduler. Moreover, we propose a reinforcement learning method embedded in a smart contract to further minimize the energy cost. Because the reinforcement learning method is informed from the historical knowledge, it relies on no request arrival and energy supply. Experimental results on Google cluster traces and real-world electricity price show that our approach is able to reduce the datacenters cost significantly compared with other benchmark algorithms.",
"title": ""
},
{
"docid": "7ddf437114258023cc7d9c6d51bb8f94",
"text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.",
"title": ""
},
{
"docid": "b2db6db73699ecc66f33e2f277cf055b",
"text": "In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our experimental results on challenging benchmark video tracking datasets show that our tracker is competitive with state-of-the-art approaches while maintaining low computational cost.",
"title": ""
},
{
"docid": "3b62ccd8e989d81f86b557e8d35a8742",
"text": "The ability to accurately judge the similarity between natural language sentences is critical to the performance of several applications such as text mining, question answering, and text summarization. Given two sentences, an effective similarity measure should be able to determine whether the sentences are semantically equivalent or not, taking into account the variability of natural language expression. That is, the correct similarity judgment should be made even if the sentences do not share similar surface form. In this work, we evaluate fourteen existing text similarity measures which have been used to calculate similarity score between sentences in many text applications. The evaluation is conducted on three different data sets, TREC9 question variants, Microsoft Research paraphrase corpus, and the third recognizing textual entailment data set.",
"title": ""
},
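To make the preceding passage concrete, here is a sketch of one of the simplest sentence-similarity baselines of the kind such evaluations include: cosine similarity over bag-of-words counts. The fourteen measures compared in the passage are more varied (and their specific list is not reproduced here); this only illustrates the general recipe of turning two sentences into vectors and scoring their overlap.

```python
# Minimal bag-of-words cosine similarity between two sentences.
import math
from collections import Counter

def cosine_similarity(s1: str, s2: str) -> float:
    v1, v2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    shared = set(v1) & set(v2)
    dot = sum(v1[w] * v2[w] for w in shared)
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

# Paraphrases share some surface words but not all, so the score is moderate.
print(cosine_similarity("the cat sat on the mat", "a cat was sitting on a mat"))
```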
{
"docid": "216e38bb5e6585099e949572f7645ebf",
"text": "The graviperception of the hypotrichous ciliate Stylonychia mytilus was investigated using electrophysiological methods and behavioural analysis. It is shown that Stylonychia can sense gravity and thereby compensates sedimentation rate by a negative gravikinesis. The graviresponse consists of a velocity-regulating physiological component (negative gravikinesis) and an additional orientational component. The latter is largely based on a physical mechanism but might, in addition, be affected by the frequency of ciliary reversals, which is under physiological control. We show that the external stimulus of gravity is transformed to a physiological signal, activating mechanosensitive calcium and potassium channels. Earlier electrophysiological experiments revealed that these ion channels are distributed in the manner of two opposing gradients over the surface membrane. Here, we show, for the first time, records of gravireceptor potentials in Stylonychia that are presumably based on this two-gradient system of ion channels. The gravireceptor potentials had maximum amplitudes of approximately 4 mV and slow activation characteristics (0.03 mV s(-1)). The presumptive number of involved graviperceptive ion channels was calculated and correlates with the analysis of the locomotive behaviour.",
"title": ""
},
{
"docid": "23ac77f4ada235965c1474bd8d3b0829",
"text": "Oral lichen planus and oral lichenoid drug reactions have similar clinical and histologic findings. The onset of oral lichenoid drug reactions appears to correspond to the administration of medications, especially antihypertensive drugs, oral hypoglycemic drugs, antimalarial drugs, gold salts, penicillamine and others. The author reports the case of 58-year-old male patient with oral lichenoid drug reaction, hypertension and diabetes mellitus. The oral manifestation showed radiated white lines with erythematous and erosive areas. The patient experienced pain and a burning sensation when eating spicy food. A tissue biopsy was carried out and revealed the characteristics of lichen planus. The patient was treated with 0.1% fluocinolone acetonide in an orabase as well as the replacement of the oral hypoglycemic and antihypertensive agents. The lesions improved and the burning sensation disappeared in two weeks after treatment. No recurrence was observed in the follow-up after three months.",
"title": ""
},
{
"docid": "0947728fbeeda33a5ca88ad0bfea5258",
"text": "The cybersecurity community typically reacts to attacks after they occur. Being reactive is costly and can be fatal where attacks threaten lives, important data, or mission success. But can cybersecurity be done proactively? Our research capitalizes on the Germination Period—the time lag between hacker communities discussing software flaw types and flaws actually being exploited—where proactive measures can be taken. We argue for a novel proactive approach, utilizing big data, for (I) identifying potential attacks before they come to fruition; and based on this identification, (II) developing preventive countermeasures. The big data approach resulted in our vision of the Proactive Cybersecurity System (PCS), a layered, modular service platform that applies big data collection and processing tools to a wide variety of unstructured data sources to predict vulnerabilities and develop countermeasures. Our exploratory study is the first to show the promise of this novel proactive approach and illuminates challenges that need to be addressed.",
"title": ""
}
] |
scidocsrr
|
ffc920437de019647b81d41ec4a699b4
|
Whole Brain Segmentation Automated Labeling of Neuroanatomical Structures in the Human Brain
|
[
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
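A hedged sketch of thin-plate spline interpolation over scattered 2-D points, following the textbook formulation the passage above builds on (radial kernel U(r) = r^2 log r^2 plus an affine part). It is not the paper's code, and the control points and heights are made up for illustration.

```python
# Thin-plate spline interpolation: solve for warp + affine coefficients,
# then evaluate the spline at query points.
import numpy as np

def tps_fit(points, heights):
    """points: (n, 2) control points; heights: (n,) values to interpolate."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = np.where(d > 0, d**2 * np.log(d**2 + 1e-300), 0.0)   # U(r), with U(0) = 0
    P = np.hstack([np.ones((n, 1)), points])                  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([heights, np.zeros(3)])
    coefs = np.linalg.solve(A, rhs)
    return coefs[:n], coefs[n:]                               # warp weights, affine terms

def tps_eval(points, w, a, query):
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    U = np.where(d > 0, d**2 * np.log(d**2 + 1e-300), 0.0)
    return a[0] + query @ a[1:] + U @ w

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
h = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
w, a = tps_fit(pts, h)
print(tps_eval(pts, w, a, np.array([[0.5, 0.5], [0.25, 0.25]])))
```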
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
}
] |
[
{
"docid": "8147143579de86a5eeb668037c2b8c5d",
"text": "In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the bias/variance tradeoff. The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any learning procedure adheres to its training data. At one end of the scale (high variance), models can entertain very complex hypotheses, allowing them to fit a wide variety of data very closely--but as a result can generalize poorly, a phenomenon called overfitting. At the other end of the scale (high bias), models make relatively simple and inflexible assumptions, and as a result may fit the data poorly, called underfitting. Exemplar and prototype models of category formation are at opposite ends of this scale: prototype models are highly biased, in that they assume a simple, standard conceptual form (the prototype), while exemplar models have very little bias but high variance, allowing them to fit virtually any combination of training data. We investigated human learners' position on this spectrum by confronting them with category structures at variable levels of intrinsic complexity, ranging from simple prototype-like categories to much more complex multimodal ones. The results show that human learners adopt an intermediate point on the bias/variance continuum, inconsistent with either of the poles occupied by most conventional approaches. We present a simple model that adjusts (regularizes) the complexity of its hypotheses in order to suit the training data, which fits the experimental data better than representative exemplar and prototype models.",
"title": ""
},
{
"docid": "409b257d38faef216a1056fd7c548587",
"text": "Reservoir computing systems utilize dynamic reservoirs having short-term memory to project features from the temporal inputs into a high-dimensional feature space. A readout function layer can then effectively analyze the projected features for tasks, such as classification and time-series analysis. The system can efficiently compute complex and temporal data with low-training cost, since only the readout function needs to be trained. Here we experimentally implement a reservoir computing system using a dynamic memristor array. We show that the internal ionic dynamic processes of memristors allow the memristor-based reservoir to directly process information in the temporal domain, and demonstrate that even a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition. The system is also used to experimentally solve a second-order nonlinear task, and can successfully predict the expected output without knowing the form of the original dynamic transfer function. Reservoir computing facilitates the projection of temporal input signals onto a high-dimensional feature space via a dynamic system, known as the reservoir. Du et al. realise this concept using metal-oxide-based memristors with short-term memory to perform digit recognition tasks and solve non-linear problems.",
"title": ""
},
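The reservoir-computing division of labor described above can be sketched in software as an echo state network: a fixed random recurrent reservoir projects the input history into a high-dimensional state, and only a linear readout is trained. This is an assumption-laden software analogue, not the memristor hardware of the passage; the sizes, scaling, and target signal are arbitrary.

```python
# Echo state network sketch: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, n_steps = 1, 100, 500

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # keep spectral radius below 1

u = rng.uniform(-1, 1, size=(n_steps, n_in))        # input sequence
y = np.sin(3 * np.cumsum(u[:, 0]) / n_steps)        # an arbitrary target signal

x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W_in @ u[t] + W @ x)                # reservoir state update
    states[t] = x

# Only the linear readout is trained (ridge regression on collected states).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
print("train MSE:", np.mean((states @ W_out - y) ** 2))
```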
{
"docid": "42b6c55e48f58e3e894de84519cb6feb",
"text": "What social value do Likes on Facebook hold? This research examines peopleâs attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which peopleâs friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.",
"title": ""
},
{
"docid": "c0ba7119eaf77c6815f43ff329457e5e",
"text": "In Utility Computing business model, the owners of the computing resources negotiate with their potential clients to sell computing power. The terms of the Quality of Service (QoS) and the economic conditions are established in a Service-Level Agreement (SLA). There are many scenarios in which the agreed QoS cannot be provided because of errors in the service provisioning or failures in the system. Since providers have usually different types of clients, according to their relationship with the provider or by the fee that they pay, it is important to minimize the impact of the SLA violations in preferential clients. This paper proposes a set of policies to provide better QoS to preferential clients in such situations. The criterion to classify clients is established according to the relationship between client and provider (external user, internal or another privileged relationship) and the QoS that the client purchases (cheap contracts or extra QoS by paying an extra fee). Most of the policies use key features of virtualization: Selective Violation of the SLAs, Dynamic Scaling of the Allocated Resources, and Runtime Migration of Tasks. The validity of the policies is demonstrated through exhaustive experiments.",
"title": ""
},
{
"docid": "6cacb8cdc5a1cc17c701d4ffd71bdab1",
"text": "Phishing costs Internet users billions of dollars a year. Using various data sets collected in real-time, this paper analyzes various aspects of phisher modi operandi. We examine the anatomy of phishing URLs and domains, registration of phishing domains and time to activation, and the machines used to host the phishing sites. Our findings can be used as heuristics in filtering phishing-related emails and in identifying suspicious domain registrations.",
"title": ""
},
{
"docid": "b6e62590995a41adb1128703060e0e2d",
"text": "Consumer-grade digital fabrication such as 3D printing is on the rise, and we believe it can be leveraged to great benefit in the arena of special education. Although 3D printing is beginning to infiltrate mainstream education, little to no research has explored 3D printing in the context of students with special support needs. We present a formative study exploring the use of 3D printing at three locations serving populations with varying ability, including individuals with cognitive, motor, and visual impairments. We found that 3D design and printing performs three functions in special education: developing 3D design and printing skills encourages STEM engagement; 3D printing can support the creation of educational aids for providing accessible curriculum content; and 3D printing can be used to create custom adaptive devices. In addition to providing opportunities to students, faculty, and caregivers in their efforts to integrate 3D printing in special education settings, our investigation also revealed several concerns and challenges. We present our investigation at three diverse sites as a case study of 3D printing in the realm of special education, discuss obstacles to efficient 3D printing in this context, and offer suggestions for designers and technologists.",
"title": ""
},
{
"docid": "63262d2a9abdca1d39e31d9937bb41cf",
"text": "A structural model is presented for synthesizing binaural sound from a monaural source. The model produces well-controlled vertical as well as horizontal effects. The model is based on a simplified time-domain description of the physics of wave propagation and diffraction. The components of the model have a one-to-one correspondence with the physical sources of sound diffraction, delay, and reflection. The simplicity of the model permits efficient implementation in DSP hardware, and thus facilitates real-time operation. Additionally, the parameters in the model can be adjusted to fit a particular individual’s characteristics, thereby producing individualized head-related transfer functions. Experimental tests verify the perceptual effectiveness of the approach.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "a959b14468625cb7692de99a986937c4",
"text": "In this paper, we describe a novel method for searching and comparing 3D objects. The method encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and to compare them. The skeletal graphs can be manually annotated to refine or restructure the search. This helps in choosing between a topological similarity and a geometric (shape) similarity. A feature of skeletal matching is the ability to perform part-matching, and its inherent intuitiveness, which helps in defining the search and in visualizing the results. Also, the matching results, which are presented in a per-node basis can be used for driving a number of registration algorithms, most of which require a good initial guess to perform registration. In this paper, we also describe a visualization tool to aid in the selection and specification of the matched objects.",
"title": ""
},
{
"docid": "afd1bc554857a1857ac4be5ee37cc591",
"text": "0953-5438/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.intcom.2011.04.007 ⇑ Corresponding author. E-mail addresses: [email protected] (M.J. Co (J. Gwizdka), [email protected] (C. Liu), ralf@b rutgers.edu (N.J. Belkin), [email protected] (X. Zh We report on an investigation into people’s behaviors on information search tasks, specifically the relation between eye movement patterns and task characteristics. We conducted two independent user studies (n = 32 and n = 40), one with journalism tasks and the other with genomics tasks. The tasks were constructed to represent information needs of these two different users groups and to vary in several dimensions according to a task classification scheme. For each participant we classified eye gaze data to construct models of their reading patterns. The reading models were analyzed with respect to the effect of task types and Web page types on reading eye movement patterns. We report on relationships between tasks and individual reading behaviors at the task and page level. Specifically we show that transitions between scanning and reading behavior in eye movement patterns and the amount of text processed may be an implicit indicator of the current task type facets. This may be useful in building user and task models that can be useful in personalization of information systems and so address design demands driven by increasingly complex user actions with information systems. One of the contributions of this research is a new methodology to model information search behavior and investigate information acquisition and cognitive processing in interactive information tasks. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0046aca3e98d75f9d3c414a6de42e017",
"text": "Fast Downward is a classical planning system based on heuris tic search. It can deal with general deterministic planning problems encoded in the propos itional fragment of PDDL2.2, including advanced features like ADL conditions and effects and deriv ed predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a pro gression planner, searching the space of world states of a planning task in the forward direct ion. However, unlike other PDDL planning systems, Fast Downward does not use the propositional P DDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multivalued planning tasks , which makes many of the implicit constraints of a propositi nal planning task explicit. Exploiting this alternative representatio n, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic fun ction, called thecausal graph heuristic , which is very different from traditional HSP-like heuristi cs based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward’s app roach to solving multi-valued planning tasks. We extend our earlier discussion of the caus al graph heuristic to tasks involving axioms and conditional effects and present some novel techn iques for search control that are used within Fast Downward’s best-first search algorithm: preferred operatorstransfer the idea of helpful actions from local search to global best-first search, deferred evaluationof heuristic functions mitigates the negative effect of large branching factors on earch performance, and multi-heuristic best-first searchcombines several heuristic evaluation functions within a s ingle search algorithm in an orthogonal way. We also describe efficient data structu es for fast state expansion ( successor generatorsandaxiom evaluators ) and present a new non-heuristic search algorithm called focused iterative-broadening search , which utilizes the information encoded in causal graphs in a ovel way. Fast Downward has proven remarkably successful: It won the “ classical” (i. e., propositional, non-optimising) track of the 4th International Planning Co mpetition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions a d provide some insights about the usefulness of the new search enhancements.",
"title": ""
},
{
"docid": "7647993815a13899e60fdc17f91e270d",
"text": "of Dissertation presented to COPPE/UFRJ as a partial fulfillment of the requirements for the degree of Master of Science (M.Sc.) WHEN AUTOENCODERS MEET RECOMMENDER SYSTEMS: COFILS APPROACH Julio César Barbieri Gonzalez de Almeida",
"title": ""
},
{
"docid": "71a76b562681450b23c512d4710c9f00",
"text": "The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represent an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.",
"title": ""
},
{
"docid": "c70383b0a3adb6e697932ef4b02877ac",
"text": "Betweenness centrality (BC) is a crucial graph problem that measures the significance of a vertex by the number of shortest paths leading through it. We propose Maximal Frontier Betweenness Centrality (MFBC): a succinct BC algorithm based on novel sparse matrix multiplication routines that performs a factor of p1/3 less communication on p processors than the best known alternatives, for graphs with n vertices and average degree k = n/p2/3. We formulate, implement, and prove the correctness of MFBC for weighted graphs by leveraging monoids instead of semirings, which enables a surprisingly succinct formulation. MFBC scales well for both extremely sparse and relatively dense graphs. It automatically searches a space of distributed data decompositions and sparse matrix multiplication algorithms for the most advantageous configuration. The MFBC implementation outperforms the well-known CombBLAS library by up to 8x and shows more robust performance. Our design methodology is readily extensible to other graph problems.",
"title": ""
},
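For readers unfamiliar with the quantity being scaled up in the passage above, this small example computes betweenness centrality on a toy weighted graph with NetworkX's single-machine routine. The distributed sparse-matrix formulation of MFBC is well beyond this sketch; the graph here is invented for illustration.

```python
# Betweenness centrality on a toy weighted graph.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("b", "c", 1.0), ("c", "d", 1.0),
    ("a", "e", 2.0), ("e", "d", 2.0),
])

# Nodes on many shortest paths (here "b" and "c") get the highest scores.
bc = nx.betweenness_centrality(G, weight="weight", normalized=True)
for node, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
```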
{
"docid": "826e01210bb9ce8171ed72043b4a304d",
"text": "Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.",
"title": ""
},
{
"docid": "1886f5d95b1db7c222bc23770835e2b7",
"text": "Signature files and inverted files are well-known index structures. In this paper we undertake a direct comparison of the two for searching for partially-specified queries in a large lexicon stored in main memory. Using n-grams to index lexicon terms, a bit-sliced signature file can be compressed to a smaller size than an inverted file if each n-gram sets only one bit in the term signature. With a signature width less than half the number of unique n-grams in the lexicon, the signature file method is about as fast as the inverted file method, and significantly smaller. Greater flexibility in memory usage and faster index generation time make signature files appropriate for searching large lexicons or other collections in an environment where memory is at a premium.",
"title": ""
},
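A toy sketch in the spirit of the passage above: each term's character n-grams set bits in a fixed-width signature, and a partially specified query only needs its own n-gram bits to be present in a candidate's signature. Bit-slicing, compression, and the inverted-file comparison are omitted; the width and the terms are made up.

```python
# N-gram signatures for partial-match lookup over a small lexicon.
def ngrams(term: str, n: int = 2):
    return {term[i:i + n] for i in range(len(term) - n + 1)}

def signature(term: str, width: int = 64) -> int:
    sig = 0
    for g in ngrams(term):
        sig |= 1 << (hash(g) % width)   # each n-gram sets exactly one bit
    return sig

lexicon = ["retrieval", "retrieve", "reversal", "signature", "inverted"]
sigs = {t: signature(t) for t in lexicon}

query = "retriev"                        # partially specified pattern
q_sig = signature(query)
candidates = [t for t, s in sigs.items() if q_sig & s == q_sig]
print(candidates)                        # superset of true matches; verify before returning
```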
{
"docid": "95514c6f357115ef181b652eedd780fd",
"text": "Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients. We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: a study of several distinct API clients in a popular, statically-typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on the API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.",
"title": ""
},
{
"docid": "70f1f5de73c3a605b296299505fd4e61",
"text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.",
"title": ""
},
{
"docid": "0932bc0e6eafeeb8b64d7b41ca820ac8",
"text": "A novel, non-invasive, imaging methodology, based on the photoacoustic effect, is introduced in the context of artwork diagnostics with emphasis on the uncovering of hidden features such as underdrawings or original sketch lines in paintings. Photoacoustic microscopy, a rapidly growing imaging method widely employed in biomedical research, exploits the ultrasonic acoustic waves, generated by light from a pulsed or intensity modulated source interacting with a medium, to map the spatial distribution of absorbing components. Having over three orders of magnitude higher transmission through strongly scattering media, compared to light in the visible and near infrared, the photoacoustic signal offers substantially improved detection sensitivity and achieves excellent optical absorption contrast at high spatial resolution. Photoacoustic images, collected from miniature oil paintings on canvas, illuminated with a nanosecond pulsed Nd:YAG laser at 1064 nm on their reverse side, reveal clearly the presence of pencil sketch lines coated over by several paint layers, exceeding 0.5 mm in thickness. By adjusting the detection bandwidth of the optically induced ultrasonic waves, photoacoustic imaging can be used for looking into a broad variety of artefacts having diverse optical properties and geometrical profiles, such as manuscripts, glass objects, plastic modern art or even stone sculpture.",
"title": ""
}
] |
scidocsrr
|
0f7906ae6cc949541333e43ff695879a
|
Statistical transformer networks: learning shape and appearance models via self supervision
|
[
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "6936b03672c64798ca4be118809cc325",
"text": "We present a deep learning framework for accurate visual correspondences and demonstrate its effectiveness for both geometric and semantic matching, spanning across rigid motions to intra-class shape or appearance variations. In contrast to previous CNN-based approaches that optimize a surrogate patch similarity objective, we use deep metric learning to directly learn a feature space that preserves either geometric or semantic similarity. Our fully convolutional architecture, along with a novel correspondence contrastive loss allows faster training by effective reuse of computations, accurate gradient computation through the use of thousands of examples per image pair and faster testing with O(n) feedforward passes for n keypoints, instead of O(n) for typical patch similarity methods. We propose a convolutional spatial transformer to mimic patch normalization in traditional features like SIFT, which is shown to dramatically boost accuracy for semantic correspondences across intra-class shape variations. Extensive experiments on KITTI, PASCAL and CUB-2011 datasets demonstrate the significant advantages of our features over prior works that use either hand-constructed or learned features.",
"title": ""
},
{
"docid": "b7387928fe8307063cafd6723c0dd103",
"text": "We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signal's structure based on optimization of the network for classification accuracy, sparse representation, and regularization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.",
"title": ""
},
{
"docid": "4551ee1978ef563259c8da64cc0d1444",
"text": "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6%. We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.",
"title": ""
}
] |
[
{
"docid": "39c2c3e7f955425cd9aaad1951d13483",
"text": "This paper proposes a novel nature-inspired algorithm called Multi-Verse Optimizer (MVO). The main inspirations of this algorithm are based on three concepts in cosmology: white hole, black hole, and wormhole. The mathematical models of these three concepts are developed to perform exploration, exploitation, and local search, respectively. The MVO algorithm is first benchmarked on 19 challenging test problems. It is then applied to five real engineering problems to further confirm its performance. To validate the results, MVO is compared with four well-known algorithms: Grey Wolf Optimizer, Particle Swarm Optimization, Genetic Algorithm, and Gravitational Search Algorithm. The results prove that the proposed algorithm is able to provide very competitive results and outperforms the best algorithms in the literature on the majority of the test beds. The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces. Note that the source codes of the proposed MVO algorithm are publicly available at http://www.alimirjalili.com/MVO.html .",
"title": ""
},
{
"docid": "1afa72a646fcfa5dfe632126014f59be",
"text": "The virulence factor database (VFDB, http://www.mgc.ac.cn/VFs/) has served as a comprehensive repository of bacterial virulence factors (VFs) for >7 years. Bacterial virulence is an exciting and dynamic field, due to the availability of complete sequences of bacterial genomes and increasing sophisticated technologies for manipulating bacteria and bacterial genomes. The intricacy of virulence mechanisms offers a challenge, and there exists a clear need to decipher the 'language' used by VFs more effectively. In this article, we present the recent major updates of VFDB in an attempt to summarize some of the most important virulence mechanisms by comparing different compositions and organizations of VFs from various bacterial pathogens, identifying core components and phylogenetic clades and shedding new light on the forces that shape the evolutionary history of bacterial pathogenesis. In addition, the 2012 release of VFDB provides an improved user interface.",
"title": ""
},
{
"docid": "fa03fe8103c69dbb8328db899400cce4",
"text": "While deploying large scale heterogeneous robots in a wide geographical area, communicating among robots and robots with a central entity pose a major challenge due to robotic motion, distance and environmental constraints. In a cloud robotics scenario, communication challenges result in computational challenges as the computation is being performed at the cloud. Therefore fog nodes are introduced which shorten the distance between the robots and cloud and reduce the communication challenges. Fog nodes also reduce the computation challenges with extra compute power. However in the above scenario, maintaining continuous communication between the cloud and the robots either directly or via fog nodes is difficult. Therefore we propose a Distributed Cooperative Multi-robots Communication (DCMC) model where Robot to Robot (R2R), Robot to Fog (R2F) and Fog to Cloud (F2C) communications are being realized. Once the DCMC framework is formed, each robot establishes communication paths to maintain a consistent communication with the cloud. Further, due to mobility and environmental condition, maintaining link with a particular robot or a fog node becomes difficult. This requires pre-knowledge of the link quality such that appropriate R2R or R2F communication can be made possible. In a scenario where Global Positioning System (GPS) and continuous scanning of channels are not advisable due to energy or security constraints, we need an accurate link prediction mechanism. In this paper we propose a Collaborative Robotic based Link Prediction (CRLP) mechanism which predicts reliable communication and quantify link quality evolution in R2R and R2F communications without GPS and continuous channel scanning. We have validated our proposed schemes using joint Gazebo/Robot Operating System (ROS), MATLAB and Network Simulator (NS3) based simulations. Our schemes are efficient in terms of energy saving and accurate link prediction.",
"title": ""
},
{
"docid": "95af5f635e876c4c66711e86fa25d968",
"text": "Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human–Computer Interaction and automatic annotation, will benefit from a robust solution. In this paper, we discuss the characteristics of human motion analysis. We divide the analysis into a modeling and an estimation phase. Modeling is the construction of the likelihood function, estimation is concerned with finding the most likely pose given the likelihood surface. We discuss model-free approaches separately. This taxonomy allows us to highlight trends in the domain and to point out limitations of the current state of the art. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "83e7119065ededfd731855fe76e76207",
"text": "Introduction: In recent years, the maturity model research has gained wide acceptance in the area of information systems and many Service Oriented Architecture (SOA) maturity models have been proposed. However, there are limited empirical studies on in-depth analysis and validation of SOA Maturity Models (SOAMMs). Objectives: The objective is to present a comprehensive comparison of existing SOAMMs to identify the areas of improvement and the research opportunities. Methods: A systematic literature review is conducted to explore the SOA adoption maturity studies. Results: A total of 20 unique SOAMMs are identified and analyzed in detail. A comparison framework is defined based on SOAMM design and usage support. The results provide guidance for SOA practitioners who are involved in selection, design, and implementation of SOAMMs. Conclusion: Although all SOAMMs propose a measurement framework, only a few SOAMMs provide guidance for selecting and prioritizing improvement measures. The current state of research shows that a gap exists in both prescriptive and descriptive purpose of SOAMM usage and it indicates the need for further research.",
"title": ""
},
{
"docid": "936048690fb043434c3ee0060c5bf7a5",
"text": "This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms or whether it is better described as a methodology for problem solving, that may use any appropriate technology. By describing four applications of case-based reasoning (CBR), that variously use: nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "eef87d8905b621d2d0bb2b66108a56c1",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
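A rough sketch of the core idea attributed to the DSNT layer in the passage: convert a heatmap into numerical coordinates differentiably by normalizing it into a probability map and taking the expectation over a fixed coordinate grid. Plain NumPy is used here only for clarity (an assumption; the actual layer lives inside an autodiff framework so gradients flow through the expectation), and the exact normalization the authors use may differ.

```python
# Heatmap -> (x, y) coordinates via expectation over a coordinate grid.
import numpy as np

def dsnt(heatmap):
    """heatmap: (H, W) unnormalized activations -> (x, y) in [-1, 1]."""
    h, w = heatmap.shape
    z = np.exp(heatmap - heatmap.max())        # softmax-style normalization
    p = z / z.sum()
    # Coordinate grids spanning [-1, 1] over pixel centers.
    xs = (2 * np.arange(w) + 1) / w - 1
    ys = (2 * np.arange(h) + 1) / h - 1
    x = float((p.sum(axis=0) * xs).sum())      # E[x]
    y = float((p.sum(axis=1) * ys).sum())      # E[y]
    return x, y

hm = np.zeros((8, 8))
hm[2, 5] = 6.0                                 # a confident peak at row 2, col 5
print(dsnt(hm))                                # coordinates pulled toward the peak, roughly (0.32, -0.32)
```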
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "18caf39ce8802f69a463cc1a4b276679",
"text": "In this thesis we describe the formal verification of a fully IEEE compliant floating point unit (FPU). The hardware is verified on the gate-level against a formalization of the IEEE standard. The verification is performed using the theorem proving system PVS. The FPU supports both single and double precision floating point numbers, normal and denormal numbers, all four IEEE rounding modes, and exceptions as required by the standard. Beside the verification of the combinatorial correctness of the FPUs we pipeline the FPUs to allow the integration into an out-of-order processor. We formally define the correctness criterion the pipelines must obey in order to work properly within the processor. We then describe a new methodology based on combining model checking and theorem proving for the verification of the pipelines.",
"title": ""
},
{
"docid": "9fc869c7e7d901e418b1b69d636cbd33",
"text": "Selecting optimal parameters for a neural network architecture can often make the difference between mediocre and state-of-the-art performance. However, little is published which parameters and design choices should be evaluated or selected making the correct hyperparameter optimization often a “black art that requires expert experiences” (Snoek et al., 2012). In this paper, we evaluate the importance of different network design choices and hyperparameters for five common linguistic sequence tagging tasks (POS, Chunking, NER, Entity Recognition, and Event Detection). We evaluated over 50.000 different setups and found, that some parameters, like the pre-trained word embeddings or the last layer of the network, have a large impact on the performance, while other parameters, for example the number of LSTM layers or the number of recurrent units, are of minor importance. We give a recommendation on a configuration that performs well among different tasks. The optimized implementation of our BiLSTM-CRF architecture is publicly available.1 This publication explains in detail the experimental setup and discusses the results. A condensed version of this paper was presented at EMNLP 2017 (Reimers and Gurevych, 2017).2",
"title": ""
},
{
"docid": "9f660caf74f1708339f7ca2ee067dc95",
"text": "Abstruct-Vehicle following and its effects on traffic flow has been an active area of research. Human driving involves reaction times, delays, and human errors that affect traffic flow adversely. One way to eliminate human errors and delays in vehicle following is to replace the human driver with a computer control system and sensors. The purpose of this paper is to develop an autonomous intelligent cruise control (AICC) system for automatic vehicle following, examine its effect on traffic flow, and compare its performance with that of the human driver models. The AICC system developed is not cooperative; Le., it does not exchange information with other vehicles and yet is not susceptible to oscillations and \" slinky \" effects. The elimination of the \" slinky \" effect is achieved by using a safety distance separation rule that is proportional to the vehicle velocity (constant time headway) and by designing the control system appropriately. The performance of the AICC system is found to be superior to that of the human driver models considered. It has a faster and better transient response that leads to a much smoother and faster traffic flow. Computer simulations are used to study the performance of the proposed AICC system and analyze vehicle following in a single lane, without passing, under manual and automatic control. In addition, several emergency situations that include emergency stopping and cut-in cases were simulated. The simulation results demonstrate the effectiveness of the AICC system and its potentially beneficial effects on traffic flow.",
"title": ""
},
{
"docid": "6ced60cadf69a3cd73bcfd6a3eb7705e",
"text": "This review article summarizes the current literature regarding the analysis of running gait. It is compared to walking and sprinting. The current state of knowledge is presented as it fits in the context of the history of analysis of movement. The characteristics of the gait cycle and its relationship to potential and kinetic energy interactions are reviewed. The timing of electromyographic activity is provided. Kinematic and kinetic data (including center of pressure measurements, raw force plate data, joint moments, and joint powers) and the impact of changes in velocity on these findings is presented. The status of shoewear literature, alterations in movement strategies, the role of biarticular muscles, and the springlike function of tendons are addressed. This type of information can provide insight into injury mechanisms and training strategies. Copyright 1998 Elsevier Science B.V.",
"title": ""
},
{
"docid": "842cd58edd776420db869e858be07de4",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "0aa566453fa3bd4bedec5ac3249d410a",
"text": "The approach of using passage-level evidence for document retrieval has shown mixed results when it is applied to a variety of test beds with different characteristics. One main reason of the inconsistent performance is that there exists no unified framework to model the evidence of individual passages within a document. This paper proposes two probabilistic models to formally model the evidence of a set of top ranked passages in a document. The first probabilistic model follows the retrieval criterion that a document is relevant if any passage in the document is relevant, and models each passage independently. The second probabilistic model goes a step further and incorporates the similarity correlations among the passages. Both models are trained in a discriminative manner. Furthermore, we present a combination approach to combine the ranked lists of document retrieval and passage-based retrieval.\n An extensive set of experiments have been conducted on four different TREC test beds to show the effectiveness of the proposed discriminative probabilistic models for passage-based retrieval. The proposed algorithms are compared with a state-of-the-art document retrieval algorithm and a language model approach for passage-based retrieval. Furthermore, our combined approach has been shown to provide better results than both document retrieval and passage-based retrieval approaches.",
"title": ""
},
{
"docid": "5aaba72970d1d055768e981f7e8e3684",
"text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash table. Although fast with strings, there is currently no information in the research literatur e on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.",
"title": ""
},
{
"docid": "69ddedba98e93523f698529716cf2569",
"text": "A fast and scalable graph processing method becomes increasingly important as graphs become popular in a wide range of applications and their sizes are growing rapidly. Most of distributed graph processing methods require a lot of machines equipped with a total of thousands of CPU cores and a few terabyte main memory for handling billion-scale graphs. Meanwhile, GPUs could be a promising direction toward fast processing of large-scale graphs by exploiting thousands of GPU cores. All of the existing methods using GPUs, however, fail to process large-scale graphs that do not fit in main memory of a single machine. Here, we propose a fast and scalable graph processing method GTS that handles even RMAT32 (64 billion edges) very efficiently only by using a single machine. The proposed method stores graphs in PCI-E SSDs and executes a graph algorithm using thousands of GPU cores while streaming topology data of graphs to GPUs via PCI-E interface. GTS is fast due to no communication overhead and scalable due to no data duplication from graph partitioning among machines. Through extensive experiments, we show that GTS consistently and significantly outperforms the major distributed graph processing methods, GraphX, Giraph, and PowerGraph, and the state-of-the-art GPU-based method TOTEM.",
"title": ""
},
{
"docid": "89b54aa0009598a4cb159b196f3749ee",
"text": "Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds including protein and peptides presents formidable challenges to the researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved and predictable release, both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called as multiple emulsion technique. Aiming to achieve this various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active.",
"title": ""
},
{
"docid": "ad4596e24f157653a36201767d4b4f3b",
"text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain stateof-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.",
"title": ""
},
{
"docid": "708915f99102f80b026b447f858e3778",
"text": "One of the main obstacles to broad application of reinforcement learning methods is the parameter sensitivity of our core learning algorithms. In many large-scale applications, online computation and function approximation represent key strategies in scaling up reinforcement learning algorithms. In this setting, we have effective and reasonably well understood algorithms for adapting the learning-rate parameter, online during learning. Such meta-learning approaches can improve robustness of learning and enable specialization to current task, improving learning speed. For temporaldifference learning algorithms which we study here, there is yet another parameter, λ, that similarly impacts learning speed and stability in practice. Unfortunately, unlike the learning-rate parameter, λ parametrizes the objective function that temporal-difference methods optimize. Different choices of λ produce different fixed-point solutions, and thus adapting λ online and characterizing the optimization is substantially more complex than adapting the learningrate parameter. There are no meta-learning method for λ that can achieve (1) incremental updating, (2) compatibility with function approximation, and (3) maintain stability of learning under both on and off-policy sampling. In this paper we contribute a novel objective function for optimizing λ as a function of state rather than time. We derive a new incremental, linear complexity λ-adaption algorithm that does not require offline batch updating or access to a model of the world, and present a suite of experiments illustrating the practicality of our new algorithm in three different settings. Taken together, our contributions represent a concrete step towards black-box application of temporaldifference learning methods in real world problems.",
"title": ""
},
{
"docid": "021bed3f2c2f09db1bad7d11108ee430",
"text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26",
"title": ""
}
] |
scidocsrr
|
5cebfafaaa63c1b9b55d705f5fbc5de4
|
SIW periodic leaky wave antenna with improved H-plane radiation pattern using baffles
|
[
{
"docid": "fcf5a390d9757ab3c8958638ccc54925",
"text": "This paper presents design equations for the microstrip-to-Substrate Integrated Waveguide (SIW) transition. The transition is decomposed in two distinct parts: the microstrip taper and the microstrip-to-SIW step. Analytical equations are used for the microstrip taper. As for the step, the microstrip is modeled by an equivalent transverse electromagnetic (TEM) waveguide. An equation relating the optimum microstrip width to the SIW width is derived using a curve fitting technique. It is shown that when the step is properly sized, it provides a return loss superior to 20 dB. Three design examples are presented using different substrate permittivity and frequency bands between 18 GHz and 75 GHz. An experimental verification is also presented. The presented technique allows to design transitions covering the complete single-mode SIW bandwidth.",
"title": ""
}
] |
[
{
"docid": "17a1de2e932b17fd3c787baa456219b6",
"text": "With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers.",
"title": ""
},
{
"docid": "4ebdfc3fe891f11902fb94973b6be582",
"text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).",
"title": ""
},
{
"docid": "061face2272a6c5a31c6fca850790930",
"text": "Antibiotic feeding studies were conducted on the firebrat,Thermobia domestica (Zygentoma, Lepismatidae) to determine if the insect's gut cellulases were of insect or microbial origin. Firebrats were fed diets containing either nystatin, metronidazole, streptomycin, tetracycline, or an antibiotic cocktail consisting of all four antibiotics, and then their gut microbial populations and gut cellulase levels were monitored and compared with the gut microbial populations and gut cellulase levels in firebrats feeding on antibiotic-free diets. Each antibiotic significantly reduced the firebrat's gut micro-flora. Nystatin reduced the firebrat's viable gut fungi by 89%. Tetracycline and the antibiotic cocktail reduced the firebrat's viable gut bacteria by 81% and 67%, respectively, and metronidazole, streptomycin, tetracycline, and the antibiotic cocktail reduced the firebrat's total gut flora by 35%, 32%, 55%, and 64%, respectively. Although antibiotics significantly reduced the firebrat's viable and total gut flora, gut cellulase levels in firebrats fed antibiotics were not significantly different from those in firebrats on an antibiotic-free diet. Furthermore, microbial populations in the firebrat's gut decreased significantly over time, even in firebrats feeding on the antibiotic-free diet, without corresponding decreases in gut cellulase levels. Based on this evidence, we conclude that the gut cellulases of firebrats are of insect origin. This conclusion implies that symbiont-independent cellulose digestion is a primitive trait in insects and that symbiont-mediated cellulose digestion is a derived condition.",
"title": ""
},
{
"docid": "446c1bf541dbed56f8321b8024391b8c",
"text": "Tokenisation has been adopted by the payment industry as a method to prevent Personal Account Number (PAN) compromise in EMV (Europay MasterCard Visa) transactions. The current architecture specified in EMV tokenisation requires online connectivity during transactions. However, it is not always possible to have online connectivity. We identify three main scenarios where fully offline transaction capability is considered to be beneficial for both merchants and consumers. Scenarios include making purchases in locations without online connectivity, when a reliable connection is not guaranteed, and when it is cheaper to carry out offline transactions due to higher communication/payment processing costs involved in online approvals. In this study, an offline contactless mobile payment protocol based on EMV tokenisation is proposed. The aim of the protocol is to address the challenge of providing secure offline transaction capability when there is no online connectivity on either the mobile or the terminal. The solution also provides end-to-end encryption to provide additional security for transaction data other than the token. The protocol is analysed against protocol objectives and we discuss how the protocol can be extended to prevent token relay attacks. The proposed solution is subjected to mechanical formal analysis using Scyther. Finally, we implement the protocol and obtain performance measurements.",
"title": ""
},
{
"docid": "b4103e5ddc58672334b66cc504dab5a6",
"text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",
"title": ""
},
{
"docid": "6ed1132aa216e15fe54e8524c9a4f8ee",
"text": "CONTEXT\nWith ageing populations, the prevalence of dementia, especially Alzheimer's disease, is set to soar. Alzheimer's disease is associated with progressive cerebral atrophy, which can be seen on MRI with high resolution. Longitudinal MRI could track disease progression and detect neurodegenerative diseases earlier to allow prompt and specific treatment. Such use of MRI requires accurate understanding of how brain changes in normal ageing differ from those in dementia.\n\n\nSTARTING POINT\nRecently, Henry Rusinek and colleagues, in a 6-year longitudinal MRI study of initially healthy elderly subjects, showed that an increased rate of atrophy in the medial temporal lobe predicted future cognitive decline with a specificity of 91% and sensitivity of 89% (Radiology 2003; 229: 691-96). WHERE NEXT? As understanding of neurodegenerative diseases increases, specific disease-modifying treatments might become available. Serial MRI could help to determine the efficacy of such treatments, which would be expected to slow the rate of atrophy towards that of normal ageing, and might also detect the onset of neurodegeneration. The amount and pattern of excess atrophy might help to predict the underlying pathological process, allowing specific therapies to be started. As the precision of imaging improves, the ability to distinguish healthy ageing from degenerative dementia should improve.",
"title": ""
},
{
"docid": "a40d11652a42ac6a6bf4368c9665fb3b",
"text": "This paper presents a taxonomy of intrusion detection systems that is then used to survey and classify a number of research prototypes. The taxonomy consists of a classification first of the detection principle, and second of certain operational aspects of the intrusion detection system as such. The systems are also grouped according to the increasing difficulty of the problem they attempt to address. These classifications are used predictively, pointing towards a number of areas of future research in the field of intrusion detection.",
"title": ""
},
{
"docid": "00309e5119bb0de1d7b2a583b8487733",
"text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "e4dbca720626a29f60a31ed9d22c30aa",
"text": "Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms to automatically classify text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using data mining that requires fewer documents for training. Instead of using words, word relation i.e. association rules from these words is used to derive feature set from pre-classified text documents. The concept of Naïve Bayes classifier is then used on derived features and finally only a single concept of Genetic Algorithm has been added for final classification. A system based on the proposed algorithm has been implemented and tested. The experimental results show that the proposed system works as a successful text classifier.",
"title": ""
},
{
"docid": "63c6c060e398ffaf7203edd30951f574",
"text": "Mycorrhizal networks, defined as a common mycorrhizal mycelium linking the roots of at least two plants, occur in all major terrestrial ecosystems. This review discusses the recent progress and challenges in our understanding of the characteristics, functions, ecology and models of mycorrhizal networks, with the goal of encouraging future research to improve our understanding of their ecology, adaptability and evolution. We focus on four themes in the recent literature: (1) the physical, physiological and molecular evidence for the existence of mycorrhizal networks, as well as the genetic characteristics and topology of networks in natural ecosystems; (2) the types, amounts and mechanisms of interplant material transfer (including carbon, nutrients, water, defence signals and allelochemicals) in autotrophic, mycoheterotrophic or partial mycoheterotrophic plants, with particular focus on carbon transfer; (3) the influence of mycorrhizal networks on plant establishment, survival and growth, and the implications for community diversity or stability in response to environmental stress; and (4) insights into emerging methods for modelling the spatial configuration and temporal dynamics of mycorrhizal networks, including the inclusion of mycorrhizal networks in conceptual models of complex adaptive systems. We suggest that mycorrhizal networks are fundamental agents of complex adaptive systems (ecosystems) because they provide avenues for feedbacks and cross-scale interactions that lead to selforganization and emergent properties in ecosystems. We have found that research in the genetics of mycorrhizal networks has accelerated rapidly in the past 5 y with increasing resolution and throughput of molecular tools, but there still remains a large gap between understanding genes and understanding the physiology, ecology and evolution of mycorrhizal networks in our changing environment. There is now enormous and exciting potential for mycorrhizal researchers to address these higher level questions and thus inform ecosystem and evolutionary research more broadly. a 2012 The British Mycological Society. Published by Elsevier Ltd. All rights reserved. 5; fax: þ1 604 822 9102. ca (S. W. Simard), [email protected] (K. J. Beiler), [email protected] ch.co.nz (J. R. Deslippe), [email protected] (L. J. Philip), [email protected] ritish Mycological Society. Published by Elsevier Ltd. All rights reserved. 40 S. W. Simard et al.",
"title": ""
},
{
"docid": "3ea35f018869f02209105200f78d03b4",
"text": "We address the problem of spectrum pricing in a cognitive radio network where multiple primary service providers compete with each other to offer spectrum access opportunities to the secondary users. By using an equilibrium pricing scheme, each of the primary service providers aims to maximize its profit under quality of service (QoS) constraint for primary users. We formulate this situation as an oligopoly market consisting of a few firms and a consumer. The QoS degradation of the primary services is considered as the cost in offering spectrum access to the secondary users. For the secondary users, we adopt a utility function to obtain the demand function. With a Bertrand game model, we analyze the impacts of several system parameters such as spectrum substitutability and channel quality on the Nash equilibrium (i.e., equilibrium pricing adopted by the primary services). We present distributed algorithms to obtain the solution for this dynamic game. The stability of the proposed dynamic game algorithms in terms of convergence to the Nash equilibrium is studied. However, the Nash equilibrium is not efficient in the sense that the total profit of the primary service providers is not maximized. An optimal solution to gain the highest total profit can be obtained. A collusion can be established among the primary services so that they gain higher profit than that for the Nash equilibrium. However, since one or more of the primary service providers may deviate from the optimal solution, a punishment mechanism may be applied to the deviating primary service provider. A repeated game among primary service providers is formulated to show that the collusion can be maintained if all of the primary service providers are aware of this punishment mechanism, and therefore, properly weight their profits to be obtained in the future.",
"title": ""
},
{
"docid": "eae289c213d5b67d91bb0f461edae7af",
"text": "China has made remarkable progress in its war against poverty since the launching of economic reform in the late 1970s. This paper examines some of the major driving forces of poverty reduction in China. Based on time series and cross-sectional provincial data, the determinants of rural poverty incidence are estimated. The results show that economic growth is an essential and necessary condition for nationwide poverty reduction. It is not, however, a sufficient condition. While economic growth played a dominant role in reducing poverty through the mid-1990s, its impacts has diminished since that time. Beyond general economic growth, growth in specific sectors of the economy is also found to reduce poverty. For example, the growth the agricultural sector and other pro-rural (vs urban-biased) development efforts can also have significant impacts on rural poverty. Notwithstanding the record of the past, our paper is consistent with the idea that poverty reduction in the future will need to rely on more than broad-based growth and instead be dependent on pro-poor policy interventions (such as national poverty alleviation programs) that can be targeted at the poor, trying to directly help the poor to increase their human capital and incomes. Determinants of Rural Poverty Reduction and Pro-poor Economic Growth in China",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "641049f7bdf194b3c326298c5679c469",
"text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …",
"title": ""
},
{
"docid": "df2bc3dce076e3736a195384ae6c9902",
"text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.",
"title": ""
},
{
"docid": "8a6a5f02a399865afbbad607fd720d00",
"text": "Estimating entropy and mutual information consistently is important for many machine learning applications. The Kozachenko-Leonenko (KL) estimator ( Kozachenko & Leonenko , 1987) is a widely used nonparametric estimator for the entropy of multivariate continuous random variables, as well as the basis of the mutual information estimator ofKraskov et al.(2004), perhaps the most widely used estimator of mutual information in this setting. Despite the practical importance of these estimators, major theoretical questions regarding their finite-sample behavior remain open. This paper proves finite-sample bounds on the bias and variance of the KL estimator, showing that it achieves the minimax convergence rate for certain classes of smooth functions. In proving these bounds, we analyze finitesample behavior of k-nearest neighbors ( k-NN) distance statistics (on which the KL estimator is based). We derive concentration inequalities for k-NN distances and a general expectation bound for statistics ofk-NN distances, which may be useful for other analyses of k-NN methods.",
"title": ""
},
{
"docid": "b402c0f2ec478cddaf202c2cfa09d966",
"text": "This paper describes a framework for building story traces (compact global views of a narrative) and story projections (selections of key story elements) and their applications in digital storytelling. Word and sense properties are extracted using the WordNet lexical database enhanced with Prolog inference rules and a number of lexical transformations. Inference rules are based on navigation in various WordNet relation chains (hypernyms, meronyms, entailment and causality links, etc.) and derived inferential closures expressed as boolean combinations of node and edge properties used to direct the navigation. The resulting abstract story traces provide a compact view of the underlying story’s key content elements and a means for automated indexing and classification of story collections [1, 2]. Ontology driven projections act as a kind of “semantic lenses” and provide a means to select a subset of a story whose key sense elements are subsumed by a set of concepts, predicates and properties expressing the focus of interest of a user. Finally, we discuss applications of these techniques in story understanding, classification of digital story collections, story generation and story-related question answering. The main contribution of the paper consists in the use of a lexical knowledge base together with an advanced rule based inference mechanism for understanding stories, and the use of the information extracted by this process for various applications.",
"title": ""
},
{
"docid": "b58c11596d8364108a9d887382237c01",
"text": "This paper discusses the phenomenon of root infinitives (RIs) in child language, focussing on a distributional restriction on the verbs that occur in this construction, viz. event-denoting verbs, as well as on a related aspect of interpretation, viz. that RIs receive modal interpretations. The modality of the construction is traced to the infinitival morphology, while the eventivity restriction is derived from the modal meaning. In contrast, the English bare form, which is often taken to instantiate the RI-phenomenon, does not seem to be subject to the eventivity constraint, nor do we find a modal reference effect. This confirms the analysis, which traces these to the infinitival morphology itself, which is absent in English. The approach not only provides a precise characterization of the distribution of the RI-phenomenon within and across languages; it also explains differences between the English bare form phenomenon and the RI-construction in languages with genuine infinitives by reference to the morphosyntax of the languages involved. The fact that children appear to be sensitive to these distinctions in the target systems at such an early age supports the general thesis of Early Morphosyntactic Convergence, which the authors argue is a pervasive property of the acquisition process. Keywords; Syntax; Acquisition; Root infinitives; Eventivity; Modality",
"title": ""
}
] |
scidocsrr
|
787da3f7146426cf32845800fc92d6b7
|
Training on multiple sub-flows to optimise the use of Machine Learning classifiers in real-world IP networks
|
[
{
"docid": "1f0c842e4e2158daa586d9ee46a0d52a",
"text": "The ability to accurately identify the network traffic associated with different P2P applications is important to a broad range of network operations including application-specific traffic engineering, capacity planning, provisioning, service differentiation,etc. However, traditional traffic to higher-level application mapping techniques such as default server TCP or UDP network-port baseddisambiguation is highly inaccurate for some P2P applications.In this paper, we provide an efficient approach for identifying the P2P application traffic through application level signatures. We firstidentify the application level signatures by examining some available documentations, and packet-level traces. We then utilize the identified signatures to develop online filters that can efficiently and accurately track the P2P traffic even on high-speed network links.We examine the performance of our application-level identification approach using five popular P2P protocols. Our measurements show thatour technique achieves less than 5% false positive and false negative ratios in most cases. We also show that our approach only requires the examination of the very first few packets (less than 10packets) to identify a P2P connection, which makes our approach highly scalable. Our technique can significantly improve the P2P traffic volume estimates over what pure network port based approaches provide. For instance, we were able to identify 3 times as much traffic for the popular Kazaa P2P protocol, compared to the traditional port-based approach.",
"title": ""
},
{
"docid": "95310634132ddca70bc1683931a71e42",
"text": "The early detection of applications associated with TCP flows is an essential step for network security and traffic engineering. The classic way to identify flows, i.e. looking at port numbers, is not effective anymore. On the other hand, state-of-the-art techniques cannot determine the application before the end of the TCP flow. In this editorial, we propose a technique that relies on the observation of the first five packets of a TCP connection to identify the application. This result opens a range of new possibilities for online traffic classification.",
"title": ""
}
] |
[
{
"docid": "b68da205eb9bf4a6367250c6f04d2ad4",
"text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials",
"title": ""
},
{
"docid": "2a12af091b7c9e0cc4c63d655d03666e",
"text": "A ll around the world in matters of governance, decentralization is the rage. Even apart from the widely debated issues of subsidiarity and devolution in the European Union and states’ rights in the United States, decentralization has been at the center stage of policy experiments in the last two decades in a large number of developing and transition economies in Latin America, Africa and Asia. The World Bank, for example, has embraced it as one of the major governance reforms on its agenda (for example, World Bank, 2000; Burki, Perry and Dillinger, 1999). Take also the examples of the two largest countries of the world, China and India. Decentralization has been regarded as the major institutional framework for the phenomenal industrial growth in the last two decades in China, taking place largely in the nonstate nonprivate sector. India ushered in a landmark constitutional reform in favor of decentralization around the same time it launched a major program of economic reform in the early 1990s. On account of its many failures, the centralized state everywhere has lost a great deal of legitimacy, and decentralization is widely believed to promise a range of bene ts. It is often suggested as a way of reducing the role of the state in general, by fragmenting central authority and introducing more intergovernmental competition and checks and balances. It is viewed as a way to make government more responsive and efcient. Technological changes have also made it somewhat easier than before to provide public services (like electricity and water supply) relatively ef ciently in smaller market areas, and the lower levels of government have now a greater ability to handle certain tasks. In a world of rampant ethnic con icts and separatist movements, decentralization is also regarded as a way of diffusing social and political tensions and ensuring local cultural and political autonomy. These potential bene ts of decentralization have attracted a very diverse range",
"title": ""
},
{
"docid": "89bc9f4c3f61c83348c02f9905923e1d",
"text": "This paper presents the control strategy and power management for an integrated three-port converter, which interfaces one solar input port, one bidirectional battery port, and an isolated output port. Multimode operations and multiloop designs are vital for such multiport converters. However, control design is difficult for a multiport converter to achieve multifunctional power management because of various cross-coupled control loops. Since there are various modes of operation, it is challenging to define different modes and to further implement autonomous mode transition based on the energy state of the three power ports. A competitive method is used to realize smooth and seamless mode transition. Multiport converter has plenty of interacting control loops due to integrated power trains. It is difficult to design close-loop controls without proper decoupling method. A detailed approach is provided utilizing state-space averaging method to obtain the converter model under different modes of operation, and then a decoupling network is introduced to allow separate controller designs. Simulation and experimental results verify the converter control design and power management during various operational modes.",
"title": ""
},
{
"docid": "f828ffe5d66a98ae75c48971ba9e66b6",
"text": "BACKGROUND\nThe purpose of this study is to review our experience with the use of the facial artery musculo-mucosal (FAMM) flap for floor of mouth (FOM) reconstruction following cancer ablation to assess its reliability, associated complications, and functional results.\n\n\nMETHODS\nThis was a retrospective analysis of 61 FAMM flaps performed for FOM reconstruction from 1997 to 2006.\n\n\nRESULTS\nNo total flap loss was observed. Fifteen cases of partial flap necrosis occurred, with 2 of them requiring revision surgery. We encountered 8 other complications, with 4 of them requiring revision surgery for an overall rate of revision surgery of 10% (6/61). The majority of patients resumed to a regular diet (85%), and speech was considered as functional and/or understandable by the surgeon in 93% of the patients. Dental restoration was successful for 83% (24/29) of the patients.\n\n\nCONCLUSION\nThe FAMM flap is well suited for FOM reconstruction because it is reliable, has few significant complications, and allows preservation of oral function.",
"title": ""
},
{
"docid": "b6d71f472848de18eadff0944eab6191",
"text": "Traditional approaches for object discovery assume that there are common characteristics among objects, and then attempt to extract features specific to objects in order to discriminate objects from background. However, the assumption “common features” may not hold, considering different variations between and within objects. Instead, we look at this problem from a different angle: if we can identify background regions, then the rest should belong to foreground. In this paper, we propose to model background to localize possible object regions. Our method is based on the observations: (1) background has limited categories, such as sky, tree, water, ground, etc., and can be easier to recognize, while there are millions of objects in our world with different shapes, colors and textures; (2) background is occluded because of foreground objects. Thus, we can localize objects based on voting from fore/background occlusion boundary. Our contribution lies: (1) we use graph-based image segmentation to yield high quality segments, which effectively leverages both flat segmentation and hierarchical segmentation approaches; (2) we model background to infer and rank object hypotheses. More specifically, we use background appearance and discriminative patches around fore/background boundary to build the background model. The experimental results show that our method can generate good quality object proposals and rank them where objects are covered highly within a small pool of proposed regions. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d437d71047b70736f5a6cbf3724d62a9",
"text": "We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoderdecoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) “fool” pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.",
"title": ""
},
{
"docid": "88398c81a8706b97f427c12d63ec62cc",
"text": "In processing human produced text using natural language processing (NLP) techniques, two fundamental subtasks that arise are (i) segmentation of the plain text into meaningful subunits (e.g., entities), and (ii) dependency parsing, to establish relations between subunits. Such structural interpretation of text provides essential building blocks for upstream expert system tasks: e.g., from interpreting textual real estate ads, one may want to provide an accurate price estimate and/or provide selection filters for end users looking for a particular property — which all could rely on knowing the types and number of rooms, etc. In this paper we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We will focus in particular on the real estate ad setting, aiming to convert an ad to a structured description, which we name property tree, comprising the tasks of (1) identifying important entities of a property (e.g., rooms) from classifieds and (2) structuring them into a tree format. In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks. For this purpose, we perform an extensive comparative study of the pipeline methods and the new proposed ∗Corresponding author Email addresses: [email protected] (Giannis Bekoulis), [email protected] (Johannes Deleu), [email protected] (Thomas Demeester), [email protected] (Chris Develder) Preprint submitted to Expert Systems with Applications February 23, 2018 joint model, reporting an improvement of over three percentage points in the overall edge F1 score of the property tree. Also, we propose attention methods, to encourage our model to focus on salient tokens during the construction of the property tree. Thus we experimentally demonstrate the usefulness of attentive neural architectures for the proposed joint model, showcasing a further improvement of two percentage points in edge F1 score for our application. While the results demonstrated are for the particular real estate setting, the model is generic in nature, and thus could be equally applied to other expert system scenarios requiring the general tasks of both (i) detecting entities (segmentation) and (ii) establishing relations among them (dependency parsing).",
"title": ""
},
{
"docid": "c82f4117c7c96d0650eff810f539c424",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "e9b942c71646f2907de65c2641329a66",
"text": "In many vision based application identifying moving objects is important and critical task. For different computer vision application Background subtraction is fast way to detect moving object. Background subtraction separates the foreground from background. However, background subtraction is unable to remove shadow from foreground. Moving cast shadow associated with moving object also gets detected making it challenge for video surveillance. The shadow makes it difficult to detect the exact shape of object and to recognize the object.",
"title": ""
},
{
"docid": "4320278dcbf0446daf3d919c21606208",
"text": "The operation of different brain systems involved in different types of memory is described. One is a system in the primate orbitofrontal cortex and amygdala involved in representing rewards and punishers, and in learning stimulus-reinforcer associations. This system is involved in emotion and motivation. A second system in the temporal cortical visual areas is involved in learning invariant representations of objects. A third system in the hippocampus is implicated in episodic memory and in spatial function. Fourth, brain systems in the frontal and temporal cortices involved in short term memory are described. The approach taken provides insight into the neuronal operations that take place in each of these brain systems, and has the aim of leading to quantitative biologically plausible neuronal network models of how each of these memory systems actually operates.",
"title": ""
},
{
"docid": "b2a43491283732082c65f88c9b03016f",
"text": "BACKGROUND\nExpressing breast milk has become increasingly prevalent, particularly in some developed countries. Concurrently, breast pumps have evolved to be more sophisticated and aesthetically appealing, adapted for domestic use, and have become more readily available. In the past, expressed breast milk feeding was predominantly for those infants who were premature, small or unwell; however it has become increasingly common for healthy term infants. The aim of this paper is to systematically explore the literature related to breast milk expressing by women who have healthy term infants, including the prevalence of breast milk expressing, reported reasons for, methods of, and outcomes related to, expressing.\n\n\nMETHODS\nDatabases (Medline, CINAHL, JSTOR, ProQuest Central, PsycINFO, PubMed and the Cochrane library) were searched using the keywords milk expression, breast milk expression, breast milk pumping, prevalence, outcomes, statistics and data, with no limit on year of publication. Reference lists of identified papers were also examined. A hand-search was conducted at the Australian Breastfeeding Association Lactation Resource Centre. Only English language papers were included. All papers about expressing breast milk for healthy term infants were considered for inclusion, with a focus on the prevalence, methods, reasons for and outcomes of breast milk expression.\n\n\nRESULTS\nA total of twenty two papers were relevant to breast milk expression, but only seven papers reported the prevalence and/or outcomes of expressing amongst mothers of well term infants; all of the identified papers were published between 1999 and 2012. Many were descriptive rather than analytical and some were commentaries which included calls for more research, more dialogue and clearer definitions of breastfeeding. While some studies found an association between expressing and the success and duration of breastfeeding, others found the opposite. In some cases these inconsistencies were compounded by imprecise definitions of breastfeeding and breast milk feeding.\n\n\nCONCLUSIONS\nThere is limited evidence about the prevalence and outcomes of expressing breast milk amongst mothers of healthy term infants. The practice of expressing breast milk has increased along with the commercial availability of a range of infant feeding equipment. The reasons for expressing have become more complex while the outcomes, when they have been examined, are contradictory.",
"title": ""
},
{
"docid": "b0ce4a13ea4a2401de4978b6859c5ef2",
"text": "We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model’s performance on different aspects of semantics. We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.",
"title": ""
},
{
"docid": "c84ef3f7dfa5e3219a6c1c2f98109651",
"text": "We present JetStream, a system that allows real-time analysis of large, widely-distributed changing data sets. Traditional approaches to distributed analytics require users to specify in advance which data is to be backhauled to a central location for analysis. This is a poor match for domains where available bandwidth is scarce and it is infeasible to collect all potentially useful data. JetStream addresses bandwidth limits in two ways, both of which are explicit in the programming model. The system incorporates structured storage in the form of OLAP data cubes, so data can be stored for analysis near where it is generated. Using cubes, queries can aggregate data in ways and locations of their choosing. The system also includes adaptive filtering and other transformations that adjusts data quality to match available bandwidth. Many bandwidth-saving transformations are possible; we discuss which are appropriate for which data and how they can best be combined. We implemented a range of analytic queries on web request logs and image data. Queries could be expressed in a few lines of code. Using structured storage on source nodes conserved network bandwidth by allowing data to be collected only when needed to fulfill queries. Our adaptive control mechanisms are responsive enough to keep end-to-end latency within a few seconds, even when available bandwidth drops by a factor of two, and are flexible enough to express practical policies.",
"title": ""
},
{
"docid": "f9a8b4d32d23c7779ee3ea00e4d64980",
"text": "BACKGROUND\nLabor is one of the most painful events in a women's life. Frequent change in positions and back massage may be effective in reducing pain during the first stage of labor.\n\n\nAIM\nThe focus of this study was to identify the impact of either change in position or back massage on pain perception during first stage of labor.\n\n\nDESIGN\nA quasi-experimental study.\n\n\nSETTING\nTeaching hospital, Kurdistan Region, Iraq, November 2014 to October 2015.\n\n\nSUBJECTS\nEighty women were interviewed as a study sample when admitted to the labor and delivery area and divided into three groups: 20 women received frequent changes in position (group A), 20 women received back massage (Group B), and 40 women constituted the control group (group C).\n\n\nMETHODS\nA structured interview questionnaire to collect background data was completed by the researcher in personal interviews with the mothers. The intervention was performed at three points in each group, and pain perception was measured after each intervention using the Face Pain Scale.\n\n\nRESULTS\nThe mean rank of the difference in pain scores among the study groups was as follows after the first, second, and third interventions, respectively: group A-52.33, 47.00, 49.2; group B-32.8, 30.28, 30.38; group C-38.44, 42.36, 41.21. There were significant differences between groups A, B, and C after the first, second, and third interventions (p1 = .011, p2 = .042, p3 = .024).\n\n\nCONCLUSIONS\nBack massage may be a more effective pain management approach than change in position during the first stage of labor.",
"title": ""
},
{
"docid": "843e7bfe22d8b93852374dde8715ca42",
"text": "In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.",
"title": ""
},
{
"docid": "36c11c29f6605f7c234e68ecba2a717a",
"text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.",
"title": ""
},
{
"docid": "98cebe058fccdf7ec799dfc95afd2e78",
"text": "An intuitionistic fuzzy set, characterized by a membership function and a non-membership function, is a generalization of fuzzy set. In this paper, based on score function and accuracy function, we introduce a method for the comparison between two intuitionistic fuzzy values and then develop some aggregation operators, such as the intuitionistic fuzzy weighted averaging operator, intuitionistic fuzzy ordered weighted averaging operator, and intuitionistic fuzzy hybrid aggregation operator, for aggregating intuitionistic fuzzy values and establish various properties of these operators.",
"title": ""
},
{
"docid": "b0636710e1374bb098bf4f68c1c5740a",
"text": "Successful use of ICT requires domain knowledge and interaction knowledge. It shapes and is shaped by the use of ICT and is less common among older adults. This paper focus on the validation of the computer literacy scale (CLS) introduced by [14]. The CLS is an objective knowledge test of ICT-related symbols and terms commonly used in the graphical user interface of interactive computer technology. It has been designed specifically for older adults with little computer knowledge and is based on the idea that knowing common symbols and terms is as necessary for using computers, as it is for reading and writing letters and books. In this paper the Computer literacy scale is described and compared with related meas‐ ures for example computer expertise (CE), Computer Proficiency (CPQ) and computer anxiety (CATS). In addition criterion validity is described with predic‐ tions of successful ICT use exemplified with (1) the use of different data entry methods and (2) the use of different ticket vending machine (TVM) designs.",
"title": ""
},
{
"docid": "9900d928d601e62cf8480cb28d3574e9",
"text": "Cellular technology has dramatically changed our society and the way we communicate. First it impacted voice telephony, and then has been making inroads into data access, applications, and services. However, today potential capabilities of the Internet have not yet been fully exploited by cellular systems. With the advent of 5G we will have the opportunity to leapfrog beyond current Internet capabilities.",
"title": ""
},
{
"docid": "95196bd9be49b426217b7d81fc51a04b",
"text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today’s logistics be it the private sector or relieving the lives of those blighted by disaster. Journal of the Operational Research Society (2006) 57, 475–489. doi:10.1057/palgrave.jors.2602125 Published online 14 December 2005",
"title": ""
}
] |
scidocsrr
|
85a04042bf93360f558b46066d525295
|
A Low-Power Bidirectional Telemetry Device With a Near-Field Charging Feature for a Cardiac Microstimulator
|
[
{
"docid": "3c7154162996f3fecbedd2aa79555ca4",
"text": "This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-/spl mu/m 1M/2P N-epi BiCMOS, and the AMI 1.5-/spl mu/m 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm/sup 2/ in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.",
"title": ""
}
] |
[
{
"docid": "1d26fc3a5f07e7ea678753e7171846c4",
"text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.",
"title": ""
},
{
"docid": "2a4e5635e2c15ce8ed84e6e296c4bbf4",
"text": "The games with a purpose paradigm proposed by Luis von Ahn [9] is a new approach for game design where useful but boring tasks, like labeling a random image found in the web, are packed within a game to make them entertaining. But there are not only large numbers of internet users that can be used as voluntary data producers but legions of mobile device owners, too. In this paper we describe the design of a location-based mobile game with a purpose: CityExplorer. The purpose of this game is to produce geospatial data that is useful for non-gaming applications like a location-based service. From the analysis of four use case studies of CityExplorer we report that such a purposeful game is entertaining and can produce rich geospatial data collections.",
"title": ""
},
{
"docid": "cbda3aafb8d8f76a8be24191e2fa7c54",
"text": "With the rapid development of robot and other intelligent and autonomous agents, how a human could be influenced by a robot’s expressed mood when making decisions becomes a crucial question in human-robot interaction. In this pilot study, we investigate (1) in what way a robot can express a certain mood to influence a human’s decision making behavioral model; (2) how and to what extent the human will be influenced in a game theoretic setting. More specifically, we create an NLP model to generate sentences that adhere to a specific affective expression profile. We use these sentences for a humanoid robot as it plays a Stackelberg security game against a human. We investigate the behavioral model of the human player.",
"title": ""
},
{
"docid": "9611686ff4eedf047460becec43ce59d",
"text": "We propose a novel location-based second-factor authentication solution for modern smartphones. We demonstrate our solution in the context of point of sale transactions and show how it can be effectively used for the detection of fraudulent transactions caused by card theft or counterfeiting. Our scheme makes use of Trusted Execution Environments (TEEs), such as ARM TrustZone, commonly available on modern smartphones, and resists strong attackers, even those capable of compromising the victim phone applications and OS. It does not require any changes in the user behavior at the point of sale or to the deployed terminals. In particular, we show that practical deployment of smartphone-based second-factor authentication requires a secure enrollment phase that binds the user to his smartphone TEE and allows convenient device migration. We then propose two novel enrollment schemes that resist targeted attacks and provide easy migration. We implement our solution within available platforms and show that it is indeed realizable, can be deployed with small software changes, and does not hinder user experience.",
"title": ""
},
{
"docid": "1c68d660e00040c73c043de47bf6d9e0",
"text": "In Germany 18 GW wind power will have been installed by the end of 2005. Until 2020, this figure reaches the 50 GW mark. Based on the results of recent studies and on the experience with existing wind projects modification of the existing grid code for connection and operation of wind farms in the high voltage grid is necessary. The paper discusses main issues of the suggested requirements by highlighting major changes and extensions. The topics considered are fault ride-through, grid voltage maintenance respective voltage control, system monitoring and protection as well as retrofitting of old units. The new requirements are defined taking into account some new developments in wind turbine technologies which should be utilized in the future to meet grid requirement. Monitoring and system protection is defined under the aspect of sustainability of the measures introduced",
"title": ""
},
{
"docid": "0872240a9df85e190bddc4d3f037381f",
"text": "This study presents a unique synthesized set of data for community college students entering the university with the intention of earning a degree in engineering. Several cohorts of longitudinal data were combined with transcript-level data from both the community college and the university to measure graduation rates in engineering. The emphasis of the study is to determine academic variables that had significant correlations with graduation in engineering, and levels of these academic variables. The article also examines the utility of data mining methods for understanding the academic variables related to achievement in science, technology, engineering, and mathematics. The practical purpose of each model is to develop a useful strategy for policy, based on success variables, that relates to the preparation and achievement of this important group of students as they move through the community college pathway.",
"title": ""
},
{
"docid": "a574355d46c6e26efe67aefe2869a0cb",
"text": "The continuously increasing cost of the US healthcare system has received significant attention. Central to the ideas aimed at curbing this trend is the use of technology in the form of the mandate to implement electronic health records (EHRs). EHRs consist of patient information such as demographics, medications, laboratory test results, diagnosis codes, and procedures. Mining EHRs could lead to improvement in patient health management as EHRs contain detailed information related to disease prognosis for large patient populations. In this article, we provide a structured and comprehensive overview of data mining techniques for modeling EHRs. We first provide a detailed understanding of the major application areas to which EHR mining has been applied and then discuss the nature of EHR data and its accompanying challenges. Next, we describe major approaches used for EHR mining, the metrics associated with EHRs, and the various study designs. With this foundation, we then provide a systematic and methodological organization of existing data mining techniques used to model EHRs and discuss ideas for future research.",
"title": ""
},
{
"docid": "bebd034597144d4656f6383d9bd22038",
"text": "The Turing test aimed to recognize the behavior of a human from that of a computer algorithm. Such challenge is more relevant than ever in today’s social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.",
"title": ""
},
{
"docid": "935445679a3e94f96bcb05a947363995",
"text": "While theories abound concerning knowledge transfer in organisations, little empirical work has been undertaken to assess any possible relationship between repositories of knowledge and those responsible for the use of knowledge. This paper develops a knowledge transfer framework based on an empirical analysis of part of the UK operation of a Fortune 100 corporation, which extends existing knowledge transfer theory. The proposed framework integrates knowledge storage and knowledge administration within a model of effective knowledge transfer. This integrated framework encompasses five components: the actors engaged in the transfer of knowledge, the typology of organisational knowledge that is transferred between the actors, the mechanisms by which the knowledge transfer is carried out, the repositories where explicit knowledge is retained and the knowledge administrator equivalent whose function is to manage and maintain knowledge. The paper concludes that a ‘hybridisation’ of knowledge transfer approach, revealed by the framework, offers some promise in organisational applications.",
"title": ""
},
{
"docid": "13659d5f693129620132bf22e021ad70",
"text": "Individuals with high functioning autism (HFA) or Asperger Syndrome (AS) exhibit difficulties in the knowledge or correct performance of social skills. This subgroup's social difficulties appear to be associated with deficits in three social cognition processes: theory of mind, emotion recognition and executive functioning. The current study outlines the development and initial administration of the group-based Social Competence Intervention (SCI), which targeted these deficits using cognitive behavioral principles. Across 27 students age 11-14 with a HFA/AS diagnosis, results indicated significant improvement on parent reports of social skills and executive functioning. Participants evidenced significant growth on direct assessments measuring facial expression recognition, theory of mind and problem solving. SCI appears promising, however, larger samples and application in naturalistic settings are warranted.",
"title": ""
},
{
"docid": "6ccca10914c09715fae47a7b832bfd6a",
"text": "This paper provides an overview of the Internet of Things (IoT) with emphasis on enabling technologies, protocols, and application issues. The IoT is enabled by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The basic premise is to have smart sensors collaborate directly without human involvement to deliver a new class of applications. The current revolution in Internet, mobile, and machine-to-machine (M2M) technologies can be seen as the first phase of the IoT. In the coming years, the IoT is expected to bridge diverse technologies to enable new applications by connecting physical objects together in support of intelligent decision making. This paper starts by providing a horizontal overview of the IoT. Then, we give an overview of some technical details that pertain to the IoT enabling technologies, protocols, and applications. Compared to other survey papers in the field, our objective is to provide a more thorough summary of the most relevant protocols and application issues to enable researchers and application developers to get up to speed quickly on how the different protocols fit together to deliver desired functionalities without having to go through RFCs and the standards specifications. We also provide an overview of some of the key IoT challenges presented in the recent literature and provide a summary of related research work. Moreover, we explore the relation between the IoT and other emerging technologies including big data analytics and cloud and fog computing. We also present the need for better horizontal integration among IoT services. Finally, we present detailed service use-cases to illustrate how the different protocols presented in the paper fit together to deliver desired IoT services.",
"title": ""
},
{
"docid": "526fd32e2486338a1db4228bdaa9aaaf",
"text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. Researchers have shown that attackers can manipulate a system's recommendations by injecting biased profiles into it. In this paper, we examine attacks that concentrate on a targeted set of users with similar tastes, biasing the system's responses to these users. We show that such attacks are both pragmatically reasonable and also highly effective against both user-based and item-based algorithms. As a result, an attacker can mount such a \"segmented\" attack with little knowledge of the specific system being targeted and with strong likelihood of success.",
"title": ""
},
{
"docid": "6931f8727f2c4e2aab19c94bcd783f59",
"text": "The steady-state and dynamic performance of a stator voltage-controlled current source inverter (CSI) induction motor drive are presented. Commutation effects are neglected and the analytical results are based on the fundamental component. A synchronously rotating reference frame linearized model in terms of a set of nondimensional parameters, based on the rotor transient time constant, is developed. It is shown that the control scheme is capable of stabilizing the drive over a region identical to the statically stable region of a conventional voltage-fed induction motor. A simple approximate expression for the drive dominant poles under no-load conditions and graphical representations of the drive dynamics under load conditions are presented. The effect of parameter variations on the drive dynamic response can be evaluated from these results. An analog simulation of the drive is developed, and the results confirm the small signal analysis of the drive system. In addition the steady-state results of the analog simulation are compared with experimental results, as well as with corresponding values obtained from a stator referred equivalent circuit. The comparison indicates good correspondence under load conditions and the limitation of applying the equivalent circuit for no-load conditions without proper recognition of the system losses.",
"title": ""
},
{
"docid": "fbb76049d6192e4571ede961f1e413a8",
"text": "We present ongoing work on a gold standard annotation of German terminology in an inhomogeneous domain. The text basis is thematically broad and contains various registers, from expert text to user-generated data taken from an online discussion forum. We identify issues related with these properties, and show our approach how to model the domain. Futhermore, we present our approach to handle multiword terms, including discontinuous ones. Finally, we evaluate the annotation quality.",
"title": ""
},
{
"docid": "06856cf61207a99146782e9e6e0911ef",
"text": "Customer ratings are valuable sources to understand their satisfaction and are critical for designing better customer experiences and recommendations. The majority of customers, however, do not respond to rating surveys, which makes the result less representative. To understand overall satisfaction, this paper aims to investigate how likely customers without responses had satisfactory experiences compared to those respondents. To infer customer satisfaction of such unlabeled sessions, we propose models using recurrent neural networks (RNNs) that learn continuous representations of unstructured text conversation. By analyzing online chat logs of over 170,000 sessions from Samsung’s customer service department, we make a novel finding that while labeled sessions contributed by a small fraction of customers received overwhelmingly positive reviews, the majority of unlabeled sessions would have received lower ratings by customers. The data analytics presented in this paper not only have practical implications for helping detect dissatisfied customers on live chat services but also make theoretical contributions on discovering the level of biases in online rating platforms. ACM Reference Format: Kunwoo Park, Meeyoung Cha, and Eunhee Rhim. 2018. Positivity Bias in Customer Satisfaction Ratings. InWWW ’18 Companion: The 2018 Web Conference Companion, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3184558.3186579",
"title": ""
},
{
"docid": "c4bc226e59648be0191b95b91b3b9f33",
"text": "In this paper we present a new class of side-channel attacks on computer hard drives. Hard drives contain one or more spinning disks made of a magnetic material. In addition, they contain different magnets which rapidly move the head to a target position on the disk to perform a write or a read. The magnetic fields from the disk’s material and head are weak and well shielded. However, we show that the magnetic field due to the moving head can be picked up by sensors outside of the hard drive. With these measurements, we are able to deduce patterns about ongoing operations. For example, we can detect what type of the operating system is booting up or what application is being started. Most importantly, no special equipment is necessary. All attacks can be performed by using an unmodified smartphone placed in proximity of a hard drive.",
"title": ""
},
{
"docid": "07eb3f5527e985c33ff7132381ee266d",
"text": "Since the first application of indirect composite resins, numerous advances in adhesive dentistry have been made. Furthermore, improvements in structure, composition and polymerization techniques led to the development of a second-generation of indirect resin composites (IRCs). IRCs have optimal esthetic performance, enhanced mechanical properties and reparability. Due to these characteristics they can be used for a wide range of clinical applications. IRCs can be used for inlays, onlays, crowns’ veneering material, fixed dentures prostheses and removable prostheses (teeth and soft tissue substitution), both on teeth and implants. The purpose of this article is to review the properties of these materials and describe a case series of patients treated with different type of restorations in various indications. *Corresponding author: Aikaterini Petropoulou, Clinical Instructor, Department of Prosthodontics, School of Dentistry, National and Kapodistrian University of Athens, Greece, Tel: +306932989104; E-mail: [email protected] Received November 10, 2013; Accepted November 28, 2013; Published November 30, 2013 Citation: Petropoulou A, Pantzari F, Nomikos N, Chronopoulos V, Kourtis S (2013) The Use of Indirect Resin Composites in Clinical Practice: A Case Series. Dentistry 3: 173. doi:10.4172/2161-1122.1000173 Copyright: © 2013 Petropoulou A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "25098f36d4a782911f523ce1ae20cf31",
"text": "The problem of stress-management has been receiving an increasing attention in related research communities due to a wider recognition of potential problems caused by chronic stress and due to the recent developments of technologies providing non-intrusive ways of collecting continuously objective measurements to monitor person's stress level. Experimental studies have shown already that stress level can be judged based on the analysis of Galvanic Skin Response (GSR) and speech signals. In this paper we investigate how classification techniques can be used to automatically determine periods of acute stress relying on information contained in GSR and/or speech of a person.",
"title": ""
},
{
"docid": "1949871b7c32416061043b46f7ed581c",
"text": "Privacy is an important issue in data publishing. Many organizations distribute non-aggregate personal data for research, and they must take steps to ensure that an adversary cannot predict sensitive information pertaining to individuals with high confidence. This problem is further complicated by the fact that, in addition to the published data, the adversary may also have access to other resources (e.g., public records and social networks relating individuals), which we call external knowledge. A robust privacy criterion should take this external knowledge into consideration. In this paper, we first describe a general framework for reasoning about privacy in the presence of external knowledge. Within this framework, we propose a novel multidimensional approach to quantifying an adversary’s external knowledge. This approach allows the publishing organization to investigate privacy threats and enforce privacy requirements in the presence of various types and amounts of external knowledge. Our main technical contributions include a multidimensional privacy criterion that is more intuitive and flexible than previous approaches to modeling background knowledge. In addition, we provide algorithms for measuring disclosure and sanitizing data that improve computational efficiency several orders of magnitude over the best known techniques.",
"title": ""
},
{
"docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2",
"text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.",
"title": ""
}
] |
scidocsrr
|
916b4f36791517fc8c322d6773bacd75
|
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images
|
[
{
"docid": "c8e8d82af2d8d2c6c51b506b4f26533f",
"text": "We present an efficient method for detecting anomalies in videos. Recent applications of convolutional neural networks have shown promises of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos including crowded scenes. Our architecture includes two main components, one for spatial feature representation, and one for learning the temporal evolution of the spatial features. Experimental results on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.",
"title": ""
}
] |
[
{
"docid": "96d123a5c9a01922ebb99623fddd1863",
"text": "Previous studies have shown that Wnt signaling is involved in postnatal mammalian myogenesis; however, the downstream mechanism of Wnt signaling is not fully understood. This study reports that the murine four-and-a-half LIM domain 1 (Fhl1) could be stimulated by β-catenin or LiCl treatment to induce myogenesis. In contrast, knockdown of the Fhl1 gene expression in C2C12 cells led to reduced myotube formation. We also adopted reporter assays to demonstrate that either β-catenin or LiCl significantly activated the Fhl1 promoter, which contains four putative consensus TCF/LEF binding sites. Mutations of two of these sites caused a significant decrease in promoter activity by luciferase reporter assay. Thus, we suggest that Wnt signaling induces muscle cell differentiation, at least partly, through Fhl1 activation.",
"title": ""
},
{
"docid": "4a934aeb23657b8cde97b5cb543f8153",
"text": "Refactoring is recognized as an essential practice in the context of evolutionary and agile software development. Recognizing the importance of the practice, modern IDEs provide some support for low-level refactorings. A notable exception in the list of supported refactorings is the “Extract Class” refactoring, which is conceived to simplify large, complex, unwieldy and less cohesive classes. In this work, we describe a method and a tool, implemented as an Eclipse plugin, designed to fulfill exactly this need. Our method involves three steps: (a) recognition of Extract Class opportunities, (b) ranking of the identified opportunities in terms of the improvement each one is anticipated to bring about to the system design, and (c) fully automated application of the refactoring chosen by the developer. The first step relies on an agglomerative clustering algorithm, which identifies cohesive sets of class members",
"title": ""
},
{
"docid": "2d7251e7c6029dae6e32c742c2ad3709",
"text": "Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.",
"title": ""
},
{
"docid": "dd5a45464936906e7b4c987274c66839",
"text": "Visual analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this paper we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.",
"title": ""
},
{
"docid": "4faafaa33bca5d8f56cce393e1227019",
"text": "Sodium hypochlorite (NaOCl) is the most common irrigant used in modern endodontics. It is highly effective at dissolving organic debris and disinfecting the root canal system due to the high pH. Extravasation of NaOCl into intra-oral and extra-oral tissues can lead to devastating outcomes leading to long-term functional and aesthetic deficits. Currently no clear guidelines are available which has caused confusion among the dental and oral and maxillofacial (OMFS) surgical community how best to manage these patients. Following a literature review and considering our own experience we have formulated clear and precise guidelines to manage patients with NaOCl injury.",
"title": ""
},
{
"docid": "1b0abb269fcfddc9dd00b3f8a682e873",
"text": "Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications. Architectural innovations within F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we explore an alternate direction of recalibrating the feature maps adaptively, to boost meaningful features, while suppressing weak ones. We draw inspiration from the recently proposed squeeze & excitation (SE) module for channel recalibration of feature maps for image classification. Towards this end, we introduce three variants of SE modules for image segmentation, (i) squeezing spatially and exciting channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE) and (iii) concurrent spatial and channel squeeze & excitation (scSE). We effectively incorporate these SE modules within three different state-of-theart F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent improvement of performance across all architectures, while minimally effecting model complexity. Evaluations are performed on two challenging applications: whole brain segmentation on MRI scans and organ segmentation on whole body contrast enhanced CT scans.",
"title": ""
},
{
"docid": "7a7b4a5f5bc4df3372a57f5e0724c685",
"text": "In the Modern scenario, the naturally available resources for power generation are being depleted at an alarming rate; firstly due to wastage of power at consumer end, secondly due to inefficiency of various power system components. A Combined Cycle Gas Turbine (CCGT) integrates two cycles- Brayton cycle (Gas Turbine) and Rankine cycle (Steam Turbine) with the objective of increasing overall plant efficiency. This is accomplished by utilising the exhaust of Gas Turbine through a waste-heat recovery boiler to run a Steam Turbine. The efficiency of a gas turbine which ranges from 28% to 33% can hence be raised to about 60% by recovering some of the low grade thermal energy from the exhaust gas for steam turbine process. This paper is a study for the modelling of CCGT and comparing it with actual operational data. The performance model for CCGT plant was developed in MATLAB/Simulink.",
"title": ""
},
{
"docid": "cf5cd34ea664a81fabe0460e4e040a2d",
"text": "A novel p-trench phase-change memory (PCM) cell and its integration with a MOSFET selector in a standard 0.18 /spl mu/m CMOS technology are presented. The high-performance capabilities of PCM cells are experimentally investigated and their application in embedded systems is discussed. Write times as low as 10 ns and 20 ns have been measured for the RESET and SET operation, respectively, still granting a 10/spl times/ read margin. The impact of the RESET pulse on PCH cell endurance has been also evaluated. Finally, cell distributions and first statistical endurance measurements on a 4 Mbit MOS demonstrator clearly assess the feasibility of the PCM technology.",
"title": ""
},
{
"docid": "b1e8f1b40c3a1ca34228358a2e8d8024",
"text": "When the training and the test data belong to different domains, the accuracy of an object classifier is significantly reduced. Therefore, several algorithms have been proposed in the last years to diminish the so called domain shift between datasets. However, all available evaluation protocols for domain adaptation describe a closed set recognition task, where both domains, namely source and target, contain exactly the same object classes. In this work, we also explore the field of domain adaptation in open sets, which is a more realistic scenario where only a few categories of interest are shared between source and target data. Therefore, we propose a method that fits in both closed and open set scenarios. The approach learns a mapping from the source to the target domain by jointly solving an assignment problem that labels those target instances that potentially belong to the categories of interest present in the source dataset. A thorough evaluation shows that our approach outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "56d8a92810ec9579de73dd9fa7b8f362",
"text": "Recent successes in training large, deep neural networks have prompted active investigation into the representations learned on their intermediate layers. Such research is difficult because it requires making sense of non-linear computations performed by millions of learned parameters, but valuable because it increases our ability to understand current models and training algorithms and thus create improved versions of them. In this paper we investigate the extent to which neural networks exhibit what we call convergent learning, which is when the representations learned by multiple nets converge to a set of features which are either individually similar between networks or where subsets of features span similar lowdimensional spaces. We propose a specific method of probing representations: training multiple networks and then comparing and contrasting their individual, learned representations at the level of neurons or groups of neurons. We begin research into this question by introducing three techniques to approximately align different neural networks on a feature or subspace level: a bipartite matching approach that makes one-to-one assignments between neurons, a sparse prediction and clustering approach that finds one-to-many mappings, and a spectral clustering approach that finds many-to-many mappings. This initial investigation reveals a few interesting, previously unknown properties of neural networks, and we argue that future research into the question of convergent learning will yield many more. The insights described here include (1) that some features are learned reliably in multiple networks, yet other features are not consistently learned; (2) that units learn to span low-dimensional subspaces and, while these subspaces are common to multiple networks, the specific basis vectors learned are not; (3) that the representation codes are a mix between a local (single unit) code and slightly, but not fully, distributed codes across multiple units; (4) that the average activation values of neurons vary considerably within a network, yet the mean activation values across different networks converge to an almost identical distribution. 1",
"title": ""
},
{
"docid": "9806e837e1d988aa2cfb10e7500d2267",
"text": "The high-functioning Autism Spectrum Screening Questionnaire (ASSQ) is a 27-item checklist for completion by lay informants when assessing symptoms characteristic of Asperger syndrome and other high-functioning autism spectrum disorders in children and adolescents with normal intelligence or mild mental retardation. Data for parent and teacher ratings in a clinical sample are presented along with various measures of reliability and validity. Optimal cutoff scores were estimated, using Receiver Operating Characteristic analysis. Findings indicate that the ASSQ is a useful brief screening device for the identification of autism spectrum disorders in clinical settings.",
"title": ""
},
{
"docid": "c9bbd74b5a74d8fac1aa3197b3085104",
"text": "We propose a new image-interpolation technique using a combination of an adaptive directional wavelet transform (ADWT) and discrete cosine transform (DCT). In the proposed method, we use ADWT to decompose the low-resolution image into different frequency subband images. The high-frequency subband images are further transformed to DCT coefficients, and the zero-padding method is used for interpolation. Simultaneously, the low-frequency subband image is replaced by the original low-resolution image. Finally, we generate the interpolated image by combining the original low-resoultion image and the interpolated subband images by using inverse DWT. Experimental results demonstrate that the proposed algorithm yields better quality images in terms of subjective and objective quality metrics compared to the other methods considered in this paper.",
"title": ""
},
{
"docid": "66056b4d6cd15282e676a836cc31f8de",
"text": "In this paper, we propose a new approach for cross-scenario clothing retrieval and fine-grained clothing style recognition. The query clothing photos captured by cameras or other mobile devices are filled with noisy background while the product clothing images online for shopping are usually presented in a pure environment. We tackle this problem by two steps. Firstly, a hierarchical super-pixel merging algorithm based on semantic segmentation is proposed to obtain the intact query clothing item. Secondly, aiming at solving the problem of clothing style recognition in different scenarios, we propose sparse coding based on domain-adaptive dictionary learning to improve the accuracy of the classifier and adaptability of the dictionary. In this way, we obtain fine-grained attributes of the clothing items and use the attributes matching score to re-rank the retrieval results further. The experiment results show that our method outperforms the state-of-the-art approaches. Furthermore, we build a well labeled clothing dataset, where the images are selected from 1.5 billion product clothing images.",
"title": ""
},
{
"docid": "5dfac9fc9612cf386419e95da1652153",
"text": "This paper introduces the concept of strategic advantage and distinguishes it from competitive advantage. This concept helps to explain the full nature of sustainable competitive advantage through uncovering the dynamics of resource-based strategy. A new classification of resources emerges, demonstrating that rents are more relevant than profits in the analysis of sustainable competitive advantage. Introduction The search for sustainable competitive advantage has been the dominant theme in the study of strategy for many years (Bain, 1956; Kay, 1994; Porter, 1980). The “resourcebased view” has recently found favour as making a key contribution to developing and delivering competitive advantage. Within this context, the concept of “core competence” is being presented as a ready-made solution to many, if not all, competitive shortcomings permeating organisations (Collis and Montgomery, 1995; Prahalad and Hamel, 1990). Both the concept of sustainable competitive advantage and the resource-based view, however, limit organisations in understanding the full nature and dynamics of strategy for the following reasons: • Sustainable competitive advantage is a journey and not a destination – it is like tomorrow which is inescapable but never arrives. Sustainable competitive advantage only becomes meaningful when this journey is experienced. For most organisations, however, the problem is how to identify where the journey lies. In fast-moving competitive environments, the nature of the journey itself keeps changing in an unpredictable fashion. As a result, the process of identifying the journey presents the main challenge. • The resource-based view strives to identify and nurture those resources that enable organisations to develop competitive advantage. The primary focus of such an analysis, however, is on the existing resources which are treated as being largely static and unchanging. The problem is that dynamic environments ceaselessly call for a new generation of resources as the context constantly shifts. Given the above considerations, organisations often fail to exploit fully the potential of both the concept of sustainable competitive advantage and the resource-based view. To reverse this situation, it is necessary to develop the competitive advantage and the resources of an organisation as a dynamic concept. This calls for rediscovering sustainable competitive advantage through exploring its origins, together with the processes that make it happen. For this purpose it is first necessary to make explicit what is meant by the terms “sustainability” and “competitive advantage” and then raise the following philosophical and practical questions: • Can the terms “sustainability” and “competitive advantage”, which can be argued to serve different purposes, be brought together in the name of unity of interest? • Is such a unity real or a discursive, aimless marriage? • Can sustainable competitive advantage assume a shared meaning for those who want to make it happen? These questions sound simple but the answers are quite difficult because the purpose of an organisation can potentially be twofold. First, the organisation has to focus on its existing resources in exploiting existing business opportunities. Second, the organisation has to develop, at the same time, a new generation of resources in order to sustain its competitiveness. There is therefore a need to balance living and unborn resources. 
This balance, which determines the effectiveness of strategy, is achieved when organisations succeed in marrying sustainability and competitive advantage in a way that it does not become a marriage of convenience. Competitive advantage and sustainability: the missing link The term “competitive advantage” has traditionally been described in terms of the attributes and resources of an organisation that allow it to outperform others in the same industry or product market (Christensen and Fahey, 1984; Kay, 1994; Porter, 1980). In contrast, the term “sustainable” considers the protection such attributes and resources have to offer over some usually undefined period of time into the future for the organisation to maintain its competitiveness. Within this context, “sustainable” can assume a number of meanings depending on the frame of reference through which it is viewed. It can be interpreted to mean endurable, defensible,",
"title": ""
},
{
"docid": "32dbbc1b9cc78f2a4db0cffd12cd2467",
"text": "OBJECTIVE\nTo evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.\n\n\nDESIGN AND MEASUREMENTS\nThe authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.\n\n\nRESULTS\nNuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%.\n\n\nCONCLUSION\nWithout modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems.",
"title": ""
},
{
"docid": "21db70be88df052de82990109941e49a",
"text": "We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.",
"title": ""
},
{
"docid": "8411c13863aeb4338327ea76e0e2725b",
"text": "There is often the need to update an installed Intrusion Detection System (IDS) due to new attack methods or upgraded computing environments. Since many current IDSs are constructed by manual encoding of expert security knowledge, changes to IDSs are expensive and slow. In this paper, we describe a data mining framework for adaptively building Intrusion Detection (ID) models. The central idea is to utilize auditing programs to extract an extensive set of features that describe each network connection or host session, and apply data mining programs to learn rules that accurately capture the behavior of intrusions and normal activities. These rules can then be used for misuse detection and anomaly detection. Detection models for new intrusions or specific components of a network system are incorporated into an existing IDS through a meta-learning (or co-operative learning) process, which produces a meta detection model that combines evidence from multiple models. We discuss the strengths of our data mining programs, namely, classification, meta-learning, association rules, and frequent episodes. We report our results of applying these programs to the (extensively gathered) network audit data from the DARPA Intrusion Detection Evaluation Program.",
"title": ""
},
{
"docid": "c0db1cd3688a18c853331772dbdfdedc",
"text": "In this review we describe the challenges and opportunities for creating magnetically active metamaterials in the optical part of the spectrum. The emphasis is on the sub-wavelength periodic metamaterials whose unit cell is much smaller than the optical wavelength. The conceptual differences between microwave and optical metamaterials are demonstrated. We also describe several theoretical techniques used for calculating the effective parameters of plasmonic metamaterials: the effective dielectric permittivity eff(ω) and magnetic permeability μeff(ω). Several examples of negative permittivity and negative permeability plasmonic metamaterials are used to illustrate the theory. c © 2008 Elsevier Ltd. All rights reserved. PACS: 42.70.-a; 41.20.Gz; 78.67.Bf",
"title": ""
},
{
"docid": "b740fd9a56701ddd8c54d92f45895069",
"text": "In vivo imaging of apoptosis in a preclinical setting in anticancer drug development could provide remarkable advantages in terms of translational medicine. So far, several imaging technologies with different probes have been used to achieve this goal. Here we describe a bioluminescence imaging approach that uses a new formulation of Z-DEVD-aminoluciferin, a caspase 3/7 substrate, to monitor in vivo apoptosis in tumor cells engineered to express luciferase. Upon apoptosis induction, Z-DEVD-aminoluciferin is cleaved by caspase 3/7 releasing aminoluciferin that is now free to react with luciferase generating measurable light. Thus, the activation of caspase 3/7 can be measured by quantifying the bioluminescent signal. Using this approach, we have been able to monitor caspase-3 activation and subsequent apoptosis induction after camptothecin and temozolomide treatment on xenograft mouse models of colon cancer and glioblastoma, respectively. Treated mice showed more than 2-fold induction of Z-DEVD-aminoluciferin luminescent signal when compared to the untreated group. Combining D-luciferin that measures the total tumor burden, with Z-DEVD-aminoluciferin that assesses apoptosis induction via caspase activation, we confirmed that it is possible to follow non-invasively tumor growth inhibition and induction of apoptosis after treatment in the same animal over time. Moreover, here we have proved that following early apoptosis induction by caspase 3 activation is a good biomarker that accurately predicts tumor growth inhibition by anti-cancer drugs in engineered colon cancer and glioblastoma cell lines and in their respective mouse xenograft models.",
"title": ""
},
{
"docid": "2a262a72133922a9232e9a3808341359",
"text": "Autonomous driving has harsh requirements of small model size and energy efficiency, in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters and their high computational costs and large storage prohibit the deployment to memory and computation resource limited systems. Lowprecision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, binary weight neural network (BWN) is the extreme case which quantizes the float-point into just 1 bit. BWNs are difficult to train and suffer from accuracy deprecation due to the extreme low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWN using a full-precision teacher network. We built DarkNetand MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.",
"title": ""
}
] |
scidocsrr
|
e9b3ef7873e59a6e0ed2e5fb0631237f
|
Large-scale image classification: Fast feature extraction and SVM training
|
[
{
"docid": "1a462dd716d6eb565fa03e0518e8d6eb",
"text": "For large scale learning problems, it is desirable if we can obtain the optimal model parameters by going through the data in only one pass. Polyak and Juditsky (1992) showed that asymptotically the test performance of the simple average of the parameters obtained by stochastic gradient descent (SGD) is as good as that of the parameters which minimize the empirical cost. However, to our knowledge, despite its optimal asymptotic convergence rate, averaged SGD (ASGD) received little attention in recent research on large scale learning. One possible reason is that it may take a prohibitively large number of training samples for ASGD to reach its asymptotic region for most real problems. In this paper, we present a finite sample analysis for the method of Polyak and Juditsky (1992). Our analysis shows that it indeed usually takes a huge number of samples for ASGD to reach its asymptotic region for improperly chosen learning rate. More importantly, based on our analysis, we propose a simple way to properly set learning rate so that it takes a reasonable amount of data for ASGD to reach its asymptotic region. We compare ASGD using our proposed learning rate with other well known algorithms for training large scale linear classifiers. The experiments clearly show the superiority of ASGD.",
"title": ""
},
{
"docid": "64e93cfb58b7cf331b4b74fadb4bab74",
"text": "Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computational time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use through performing a row-based, approximate matrix factorization, and which loads only essential data to each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n2) to O(np/m), and improves computation time to O(np2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.",
"title": ""
}
] |
[
{
"docid": "73bf9a956ea7a10648851c85ef740db0",
"text": "Printed atmospheric spark gaps as ESD-protection on PCBs are examined. At first an introduction to the physic behind spark gaps. Afterward the time lag (response time) vs. voltage is measured with high load impedance. The dependable clamp voltage (will be defined later) is measured as a function of the load impedance and the local field in the air gap is simulated with FIT simulation software. At last the observed results are discussed on the basic of the physic and the simulations.",
"title": ""
},
{
"docid": "ee3b9382afc9455e53dd41d3725eb74a",
"text": "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy stateof-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/ sparse-structure-selection.",
"title": ""
},
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "48631b74c184f554f9c5692ed703d398",
"text": "Simultaneously and accurately forecasting the behavior of many interacting agents is imperative for computer vision applications to be widely deployed (e.g., autonomous vehicles, security, surveillance, sports). In this paper, we present a technique using conditional variational autoencoder which learns a model that “personalizes” prediction to individual agent behavior within a group representation. Given the volume of data available and its adversarial nature, we focus on the sport of basketball and show that our approach efficiently predicts context-specific agent motions. We find that our model generates results that are three times as accurate as previous state of the art approaches (5.74 ft vs. 17.95 ft).",
"title": ""
},
{
"docid": "dec9d40902f10c7eb5e627b7674fbd9f",
"text": "In the paper we propose a genetic algorithm based on insertion heuristics for the vehicle routing problem with constraints. A random insertion heuristic is used to construct initial solutions and to reconstruct the existing ones. The location where a randomly chosen node will be inserted is selected by calculating an objective function. The process of random insertion preserves stochastic characteristics of the genetic algorithm and preserves feasibility of generated individuals. The defined crossover and mutation operators incorporate random insertion heuristics, analyse individuals and select which parts should be reinserted. Additionally, the second population is used in the mutation process. The second population increases the probability that the solution, obtained in the mutation process, will survive in the first population and increase the probability to find the global optimum. The result comparison shows that the solutions, found by the proposed algorithm, are similar to the optimal solutions obtained by other genetic algorithms. However, in most cases the proposed algorithm finds the solution in a shorter time and it makes this algorithm competitive with others.",
"title": ""
},
{
"docid": "313271f587afe3224eaafc4243ab522f",
"text": "Treatment-induced chronic vaginal changes after definitive radio(chemo)therapy for locally advanced cervical cancer patients are reported as one of the most distressing consequences of treatment, with major impact on quality of life. Although these vaginal changes are regularly documented during gynecological follow-up examinations, the classic radiation morbidity grading scales are not concise in their reporting. The aim of the study was therefore to identify and qualitatively describe, on the basis of vaginoscopies, morphological changes in the vagina after definitive radio(chemo)therapy and to establish a classification system for their detailed and reproducible documentation. Vaginoscopy with photodocumentation was performed prospectively in 22 patients with locally advanced cervical cancer after definitive radio(chemo)therapy at 3–24 months after end of treatment. All patients were in complete remission and without severe grade 3/4 morbidity outside the vagina. Five morphological parameters, which occurred consistently after treatment, were identified: mucosal pallor, telangiectasia, fragility of the vaginal wall, ulceration, and adhesions/occlusion. The symptoms in general were observed at different time points in individual patients; their quality was independent of the time of assessment. Based on the morphological findings, a comprehensive descriptive and semiquantitative scoring system was developed, which allows for classification of vaginal changes. A photographic atlas to illustrate the morphology of the alterations is presented. Vaginoscopy is an easily applicable, informative, and well-tolerated procedure for the objective assessment of morphological vaginal changes after radio(chemo)therapy and provides comprehensive and detailed information. This allows for precise classification of the severity of individual changes. Veränderungen der Vagina werden von Patientinnen nach definitiver Radio(chemo)therapie bei lokal fortgeschrittenem Zervixkarzinom als äußerst belastende chronische Morbidität beschrieben, welche die Lebensqualität signifikant beeinträchtigen kann. Obwohl diese vaginalen Nebenwirkungen routinemäßig in den gynäkologischen Nachsorgeuntersuchungen erfasst werden, werden sie in den klassischen Dokumentationssystemen für Nebenwirkungen der Strahlentherapie nur sehr unpräzise abgebildet. Ziele der vorliegenden Studie waren daher die Identifikation und qualitative Beschreibung morphologischer Veränderungen der Vagina nach definitiver Radio(chemo)therapie anhand von vaginoskopischen Bildern und die Etablierung eines spezifischen Klassifikationssystems für eine detaillierte und reproduzierbare Dokumentation. Von 22 Patientinnen mit lokal fortgeschrittenem Zervixkarzinom wurden vaginoskopische Bilder in einem Zeitraum von 3–24 Monaten nach definitiver Radio(chemo)therapie angefertigt. Alle Patientinnen waren in kompletter Remission und wiesen keine schwerwiegenden G3- oder G4-Nebenwirkungen außerhalb der Vagina auf. Es wurden regelhaft 5 morphologische Parameter bei den Patientinnen nach Radio(chemo)therapie beobachtet: Blässe der Schleimhaut, Teleangiektasien, Fragilität der Vaginalwand, Ulzerationen und Verklebungen bzw. Okklusionen. Diese Endpunkte wurden in den einzelnen Patientinnen zu verschiedenen Zeitpunkten gefunden, waren in ihrer Qualität aber unabhängig vom Zeitpunkt der Erhebung. 
Aufbauend auf diesen morphologischen Befunden wurde ein umfassendes deskriptives und semiquantitatives Beurteilungssystem für die Klassifikation vaginaler Nebenwirkungen entwickelt. Ein Bildatlas, der die Morphologie der Veränderungen illustriert, wird präsentiert. Die Vaginoskopie ist eine einfach anzuwendende, informative und von den Patientinnen gut tolerierte Untersuchungsmethode. Sie ist geeignet, morphologische Veränderungen der Vagina nach Radio(chemo)therapie objektiv zu erheben, und liefert umfassende und detaillierte Informationen, die eine präzise und reproduzierbare Klassifikation erlauben.",
"title": ""
},
{
"docid": "d95fb46b3857b55602af2cf271300f5a",
"text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.",
"title": ""
},
{
"docid": "59693182ac2803d821c508e92383d499",
"text": "We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use at comparisons between images of the original model against those of a simplified model to determine the cost of an ease collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.",
"title": ""
},
{
"docid": "984dc75b97243e448696f2bf0ba3c2aa",
"text": "Background: Predicting credit card payment default is critical for the successful business model of a credit card company. An accurate predictive model can help the company identify customers who might default their payment in the future so that the company can get involved earlier to manage risk and reduce loss. It is even better if a model can assist the company on credit card application approval to minimize the risk at upfront. However, credit card default prediction is never an easy task. It is dynamic. A customer who paid his/her payment on time in the last few months may suddenly default his/her next payment. It is also unbalanced given the fact that default payment is rare compared to non-default payments. Unbalanced dataset will easily fail using most machine learning techniques if the dataset is not treated properly.",
"title": ""
},
{
"docid": "323c217fa6e4b0c097779379d8ca8561",
"text": "Photosynthetic antenna complexes capture and concentrate solar radiation by transferring the excitation to the reaction center that stores energy from the photon in chemical bonds. This process occurs with near-perfect quantum efficiency. Recent experiments at cryogenic temperatures have revealed that coherent energy transfer--a wave-like transfer mechanism--occurs in many photosynthetic pigment-protein complexes. Using the Fenna-Matthews-Olson antenna complex (FMO) as a model system, theoretical studies incorporating both incoherent and coherent transfer as well as thermal dephasing predict that environmentally assisted quantum transfer efficiency peaks near physiological temperature; these studies also show that this mechanism simultaneously improves the robustness of the energy transfer process. This theory requires long-lived quantum coherence at room temperature, which never has been observed in FMO. Here we present evidence that quantum coherence survives in FMO at physiological temperature for at least 300 fs, long enough to impact biological energy transport. These data prove that the wave-like energy transfer process discovered at 77 K is directly relevant to biological function. Microscopically, we attribute this long coherence lifetime to correlated motions within the protein matrix encapsulating the chromophores, and we find that the degree of protection afforded by the protein appears constant between 77 K and 277 K. The protein shapes the energy landscape and mediates an efficient energy transfer despite thermal fluctuations.",
"title": ""
},
{
"docid": "d81fd54d3f1d005e71c7a5da1679b04a",
"text": "This report illustrates photographically the adverse influence of short wavelength-induced light-scattering and autofluorescence on image quality, and the improvement of image quality that results by filtering out light wavelengths shorter than 480 nm. It provides additional data on the improvement in human vision (under conditions of excessive intraocular light-scattering and fluorescence) by filters that prevent short wavelength radiant energy from entering the eye.",
"title": ""
},
{
"docid": "e1e74832bdc8a675c342b868b80bf1e4",
"text": "Many network phenomena are well modeled as spreads of epidemics through a network. Prominent examples include the spread of worms and email viruses, and, more generally, faults. Many types of information dissemination can also be modeled as spreads of epidemics. In this paper we address the question of what makes an epidemic either weak or potent. More precisely, we identify topological properties of the graph that determine the persistence of epidemics. In particular, we show that if the ratio of cure to infection rates is larger than the spectral radius of the graph, then the mean epidemic lifetime is of order log n, where n is the number of nodes. Conversely, if this ratio is smaller than a generalization of the isoperimetric constant of the graph, then the mean epidemic lifetime is of order e/sup na/, for a positive constant a. We apply these results to several network topologies including the hypercube, which is a representative connectivity graph for a distributed hash table, the complete graph, which is an important connectivity graph for BGP, and the power law graph, of which the AS-level Internet graph is a prime example. We also study the star topology and the Erdos-Renyi graph as their epidemic spreading behaviors determine the spreading behavior of power law graphs.",
"title": ""
},
{
"docid": "76715b342c0b0a475ba6db06a0345c7b",
"text": "Generalized linear mixed models are a widely used tool for modeling longitudinal data. However , their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L 1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that allows to maximize the penalized log-likelihood yielding models with reduced complexity. In contrast to common procedures it can be used in high-dimensional settings where a large number of potentially influential explanatory variables is available. The method is investigated in simulation studies and illustrated by use of real data sets.",
"title": ""
},
{
"docid": "701ddde2a7ff66c6767a2978ce7293f2",
"text": "Epigenetics is the study of heritable changesin gene expression that does not involve changes to theunderlying DNA sequence, i.e. a change in phenotype notinvolved by a change in genotype. At least three mainfactor seems responsible for epigenetic change including DNAmethylation, histone modification and non-coding RNA, eachone sharing having the same property to affect the dynamicof the chromatin structure by acting on Nucleosomes position. A nucleosome is a DNA-histone complex, where around150 base pairs of double-stranded DNA is wrapped. Therole of nucleosomes is to pack the DNA into the nucleusof the Eukaryote cells, to form the Chromatin. Nucleosomepositioning plays an important role in gene regulation andseveral studies shows that distinct DNA sequence featureshave been identified to be associated with nucleosomepresence. Starting from this suggestion, the identificationof nucleosomes on a genomic scale has been successfullyperformed by DNA sequence features representation andclassical supervised classification methods such as SupportVector Machines, Logistic regression and so on. Taking inconsideration the successful application of the deep neuralnetworks on several challenging classification problems, inthis paper we want to study how deep learning network canhelp in the identification of nucleosomes.",
"title": ""
},
{
"docid": "a6cf86ffa90c74b7d7d3254c7d33685a",
"text": "Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semistructured data as graphs where nodes correspond to primitives (parts, interest points, and segments) and edges characterize the relationships between these primitives. However, these nonvectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of--explicit/implicit--graph vectorization and embedding. This embedding process should be resilient to intraclass graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.",
"title": ""
},
{
"docid": "73a8c38d820e204c6993974fb352d33f",
"text": "Many continuous control tasks have bounded action spaces. When policy gradient methods are applied to such tasks, out-of-bound actions need to be clipped before execution, while policies are usually optimized as if the actions are not clipped. We propose a policy gradient estimator that exploits the knowledge of actions being clipped to reduce the variance in estimation. We prove that our estimator, named clipped action policy gradient (CAPG), is unbiased and achieves lower variance than the conventional estimator that ignores action bounds. Experimental results demonstrate that CAPG generally outperforms the conventional estimator, indicating that it is a better policy gradient estimator for continuous control tasks. The source code is available at https: //github.com/pfnet-research/capg.",
"title": ""
},
{
"docid": "e6f75423017585cf7e65b316fd20c3f0",
"text": "Blockchain, as a mechanism to decentralize services, security, and verifiability, offers a peer-to-peer system in which distributed nodes collaboratively affirm transaction provenance. In particular, blockchain enforces continuous storage of transaction history, secured via digital signature, and affirmed through consensus. In this study, we consider the recent surge in blockchain interest as an alternative to traditional centralized systems, and consider the emerging applications thereof. In particular, we assess the key techniques required for blockchain implementation, offering a primer to guide research practitioners. We first outline the blockchain framework in general, and then provide a detailed review of the component data and network structures. Additionally, we consider the breadth of applications to which blockchain has been applied, broadly implicating Internet of Things (IoT), Big Data, and Cloud and Edge computing paradigms, along with many other emerging applications. Finally, we assess the various challenges to blockchain implementation for widespread practical use, considering the security vulnerabilities to majority attacks, selfish mining, and privacy leakage, as well as performance limitations of blockchain platforms in terms of scalability and availability.",
"title": ""
},
{
"docid": "3ee39231fc2fbf3b6295b1b105a33c05",
"text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.",
"title": ""
},
{
"docid": "2a1f1576ab73e190dce400dedf80df36",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading motivation reconsidered the concept of competence is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "db3b14f6298771b44506a17da57c21ae",
"text": "Virtuosos are human beings who exhibit exceptional performance in their field of activity. In particular, virtuosos are interesting for creativity studies because they are exceptional problem solvers. However, virtuosity is an under-studied field of human behaviour. Little is known about the processes involved to become a virtuoso, and in how they distinguish themselves from normal performers. Virtuosos exist in virtually all domains of human activities, and we focus in this chapter on the specific case of virtuosity in jazz improvisation. We first introduce some facts about virtuosos coming from physiology, and then focus on the case of jazz. Automatic generation of improvisation has long been a subject of study for computer science, and many techniques have been proposed to generate music improvisation in various genres. The jazz style in particular abounds with programs that create improvisations of a reasonable level. However, no approach so far exhibits virtuosolevel performance. We describe an architecture for the generation of virtuoso bebop phrases which integrates novel music generation mechanisms in a principled way. We argue that modelling such outstanding phenomena can contribute substantially to the understanding of creativity in humans and machines. 5.1 Virtuosos as Exceptional Humans 5.1.1 Virtuosity in Art There is no precise definition of virtuosity, but only a commonly accepted view that virtuosos are human beings that excel in their practice to the point of exhibiting exceptional performance. Virtuosity exists in virtually all forms of human activity. In painting, several artists use virtuosity as a means to attract the attention of their audience. Felice Varini paints on urban spaces in such a way that there is a unique viewpoint from which a spectator sees the painting as a perfect geometrical figure. The F. Pachet ( ) Sony CSL-Paris, 6, rue Amyot, 75005 Paris, France e-mail: [email protected] J. McCormack, M. d’Inverno (eds.), Computers and Creativity, DOI 10.1007/978-3-642-31727-9_5, © Springer-Verlag Berlin Heidelberg 2012 115",
"title": ""
}
] |
scidocsrr
|
7c8401c55239df878548d668281024e4
|
The Problem of Trusted Third Party in Authentication and Digital Signature Protocols
|
[
{
"docid": "59308c5361d309568a94217c79cf0908",
"text": "Want to get experience? Want to get any ideas to create new things in your life? Read cryptography an introduction to computer security now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.",
"title": ""
}
] |
[
{
"docid": "32d0a26f21a25fe1e783b1edcfbcf673",
"text": "Histologic grading has been used as a guide for clinical management in follicular lymphoma (FL). Proliferation index (PI) of FL generally correlates with tumor grade; however, in cases of discordance, it is not clear whether histologic grade or PI correlates with clinical aggressiveness. To objectively evaluate these cases, we determined PI by Ki-67 immunostaining in 142 cases of FL (48 grade 1, 71 grade 2, and 23 grade 3). A total of 24 cases FL with low histologic grade but high PI (LG-HPI) were identified, a frequency of 18%. On histologic examination, LG-HPI FL often exhibited blastoid features. Patients with LG-HPI FL had inferior disease-specific survival but a higher 5-year disease-free rate than low-grade FL with concordantly low PI (LG-LPI). However, transformation to diffuse large B-cell lymphoma was uncommon in LG-HPI cases (1 of 19; 5%) as compared with LG-LPI cases (27 of 74; 36%). In conclusion, LG-HPI FL appears to be a subgroup of FL with clinical behavior more akin to grade 3 FL. We propose that these LG-HPI FL cases should be classified separately from cases of low histologic grade FL with concordantly low PI.",
"title": ""
},
{
"docid": "f10353fe0c78877a6e78509badba9fcd",
"text": "Chronic Wounds are ulcers presenting a difficult or nearly interrupted cicatrization process that increase the risk of complications to the health of patients, like amputation and infections. This research proposes a general noninvasive methodology for the segmentation and analysis of chronic wounds images by computing the wound areas affected by necrosis. Invasive techniques are usually used for this calculation, such as manual planimetry with plastic films. We investigated algorithms to perform the segmentation of wounds as well as the use of several convolutional networks for classifying tissue as Necrotic, Granulation or Slough. We tested four architectures: U-Net, Segnet, FCN8 and FCN32, and proposed a color space reduction methodology that increased the reported accuracies, specificities, sensitivities and Dice coefficients for all 4 networks, achieving very good levels.",
"title": ""
},
{
"docid": "b32d6bc2d14683c4bf3557dad560edca",
"text": "In this paper, we describe the fabrication and testing of a stretchable fabric sleeve with embedded elastic strain sensors for state reconstruction of a soft robotic joint. The strain sensors are capacitive and composed of graphite-based conductive composite electrodes and a silicone elastomer dielectric. The sensors are screenprinted directly into the fabric sleeve, which contrasts the approach of pre-fabricating sensors and subsequently attaching them to a host. We demonstrate the capabilities of the sensor-embedded fabric sleeve by determining the joint angle and end effector position of a soft pneumatic joint with similar accuracy to a traditional IMU. Furthermore, we show that the sensory sleeve is capable of capturing more complex material states, such as fabric buckling and non-constant curvatures along linkages and joints.",
"title": ""
},
{
"docid": "999070b182a328b1927be4575f04e434",
"text": "Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.",
"title": ""
},
{
"docid": "df6d4e6d74d96b7ab1951cc869caad59",
"text": "A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with <inline-formula><tex-math notation=\"LaTeX\">$\\vert {{S}}_{11}\\vert < -{\\text{10 dB}}$</tex-math></inline-formula> and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.",
"title": ""
},
{
"docid": "04d5824991ada6194f3028a900d7f31b",
"text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.",
"title": ""
},
{
"docid": "e294307ea4108d8cf467585f27d3a48b",
"text": "Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "40ebf37907d738dd64b5a87b93b4a432",
"text": "Deep learning has led to many breakthroughs in machine perception and data mining. Although there are many substantial advances of deep learning in the applications of image recognition and natural language processing, very few work has been done in video analysis and semantic event detection. Very deep inception and residual networks have yielded promising results in the 2014 and 2015 ILSVRC challenges, respectively. Now the question is whether these architectures are applicable to and computationally reasonable in a variety of multimedia datasets. To answer this question, an efficient and lightweight deep convolutional network is proposed in this paper. This network is carefully designed to decrease the depth and width of the state-of-the-art networks while maintaining the high-performance. The proposed deep network includes the traditional convolutional architecture in conjunction with residual connections and very light inception modules. Experimental results demonstrate that the proposed network not only accelerates the training procedure, but also improves the performance in different multimedia classification tasks.",
"title": ""
},
{
"docid": "bc5c008b5e443b83b2a66775c849fffb",
"text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.",
"title": ""
},
{
"docid": "f8b201105e3b92ed4ef2a884cb626c0d",
"text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.",
"title": ""
},
{
"docid": "712be4d6aabf8e76b050c30e6241ad0f",
"text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race? Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.",
"title": ""
},
{
"docid": "1d56b3aa89484e3b25557880ec239930",
"text": "We present an FPGA accelerator for the Non-uniform Fast Fourier Transform, which is a technique to reconstruct images from arbitrarily sampled data. We accelerate the compute-intensive interpolation step of the NuFFT Gridding algorithm by implementing it on an FPGA. In order to ensure efficient memory performance, we present a novel FPGA implementation for Geometric Tiling based sorting of the arbitrary samples. The convolution is then performed by a novel Data Translation architecture which is composed of a multi-port local memory, dynamic coordinate-generator and a plug-and-play kernel pipeline. Our implementation is in single-precision floating point and has been ported onto the BEE3 platform. Experimental results show that our FPGA implementation can generate fairly high performance without sacrificing flexibility for various data-sizes and kernel functions. We demonstrate up to 8X speedup and up to 27 times higher performance-per-watt over a comparable CPU implementation and up to 20% higher performance-per-watt when compared to a relevant GPU implementation.",
"title": ""
},
{
"docid": "6504562f140b49d412446817e76383e8",
"text": "As more businesses realized that data, in all forms and sizes, is critical to making the best possible decisions, we see the continued growth of systems that support massive volume of non-relational or unstructured forms of data. Nothing shows the picture more starkly than the Gartner Magic quadrant for operational database management systems, which assumes that, by 2017, all leading operational DBMSs will offer multiple data models, relational and NoSQL, in a single DBMS platform. Having a single data platform for managing both well-structured data and NoSQL data is beneficial to users; this approach reduces significantly integration, migration, development, maintenance, and operational issues. Therefore, a challenging research work is how to develop efficient consolidated single data management platform covering both relational data and NoSQL to reduce integration issues, simplify operations, and eliminate migration issues. In this tutorial, we review the previous work on multi-model data management and provide the insights on the research challenges and directions for future work. The slides and more materials of this tutorial can be found at http://udbms.cs.helsinki.fi/?tutorials/edbt2017.",
"title": ""
},
{
"docid": "660465cbd4bd95108a2381ee5a97cede",
"text": "In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method.",
"title": ""
},
{
"docid": "7d1faee4929d60d952cc8c2c12fa16d3",
"text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.",
"title": ""
},
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "0c025ec05a1f98d71c9db5bfded0a607",
"text": "Many organizations, such as banks, airlines, telecommunications companies, and police departments, routinely use queueing models to help determine capacity levels needed to respond to experienced demands in a timely fashion. Though queueing analysis has been used in hospitals and other healthcare settings, its use in this sector is not widespread. Yet, given the pervasiveness of delays in healthcare and the fact that many healthcare facilities are trying to meet increasing demands with tightly constrained resources, queueing models can be very useful in developing more effective policies for bed allocation and staffing, and in identifying other opportunities for improving service. Queueing analysis is also a key tool in estimating capacity requirements for possible future scenarios, including demand surges due to new diseases or acts of terrorism. This chapter describes basic queueing models as well as some simple modifications and extensions that are particularly useful in the healthcare setting, and give examples of their use. The critical issue of data requirements is also be discussed as well as model choice, modelbuilding and the interpretation and use of results.",
"title": ""
},
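Capacity questions of the kind raised in this passage are often answered with the M/M/s (Erlang C) model. A minimal sketch follows; the arrival rate, service rate, and server counts are made-up illustrative numbers, not values from the chapter.

from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    # Probability an arriving customer must wait in an M/M/s queue (Erlang C)
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # server utilization (must be < 1)
    if rho >= 1:
        raise ValueError("Unstable system: utilization >= 1")
    summation = sum(a**k / factorial(k) for k in range(servers))
    top = a**servers / (factorial(servers) * (1 - rho))
    return top / (summation + top)

def mean_wait(arrival_rate, service_rate, servers):
    # Expected waiting time in queue, Wq, for an M/M/s system
    pw = erlang_c(arrival_rate, service_rate, servers)
    return pw / (servers * service_rate - arrival_rate)

# Illustrative numbers: 5 arrivals/hour, 30-minute average service time
lam, mu = 5.0, 2.0
for s in range(3, 7):
    print(s, round(erlang_c(lam, mu, s), 3), round(mean_wait(lam, mu, s), 3))

Sweeping the number of servers like this is the typical use: choose the smallest s whose predicted delay statistics meet the service target.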
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
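A univariate benchmark of the sort the passage argues for can be a few lines of code. Below is a minimal simple-exponential-smoothing sketch; the demand series and smoothing constant are placeholders, and a real evaluation would tune alpha on a holdout period.

def simple_exponential_smoothing(series, alpha=0.3):
    # One-step-ahead forecasts: forecast[t+1] = alpha * y[t] + (1 - alpha) * forecast[t]
    level = series[0]
    forecasts = [level]                  # forecast for period 1, made at period 0
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts                     # forecasts[t] is the prediction for series[t+1]

sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]   # illustrative demand series
fc = simple_exponential_smoothing(sales, alpha=0.3)
mae = sum(abs(f - y) for f, y in zip(fc[:-1], sales[1:])) / (len(sales) - 1)
print("one-step-ahead MAE:", round(mae, 2))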
{
"docid": "188e971e34192af93c36127b69d89064",
"text": "1 1 This paper has been revised and extended from the authors' previous work [23][24][25]. ABSTRACT Ontology mapping seeks to find semantic correspondences between similar elements of different ontologies. It is a key challenge to achieve semantic interoperability in building the Semantic Web. This paper proposes a new generic and adaptive ontology mapping approach, called the PRIOR+, based on propagation theory, information retrieval techniques and artificial intelligence. The approach consists of three major modules, i.e., the IR-based similarity generator, the adaptive similarity filter and weighted similarity aggregator, and the neural network based constraint satisfaction solver. The approach first measures both linguistic and structural similarity of ontologies in a vector space model, and then aggregates them using an adaptive method based on their harmonies, which is defined as an estimator of performance of similarity. Finally to improve mapping accuracy the interactive activation and competition neural network is activated, if necessary, to search for a solution that can satisfy ontology constraints. The experimental results show that harmony is a good estimator of f-measure; the harmony based adaptive aggregation outperforms other aggregation methods; neural network approach significantly boosts the performance in most cases. Our approach is competitive with top ranked systems on benchmark tests at OAEI campaign 2007, and performs the best on real cases in OAEI benchmark tests.",
"title": ""
}
] |
scidocsrr
|
63353dfe47623fca110fe9eb341f4d5c
|
Extracting general-purpose features from LIDAR data
|
[
{
"docid": "59f3c511765c52702b9047a688256532",
"text": "Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult— especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data. We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.",
"title": ""
}
] |
[
{
"docid": "803b3d29c5514865cd8e17971f2dd8d6",
"text": "This paper comprehensively analyzes the relationship between space-vector modulation and three-phase carrier-based pulsewidth modualtion (PWM). The relationships involved, such as the relationship between modulation signals (including zero-sequence component and fundamental components) and space vectors, the relationship between the modulation signals and the space-vector sectors, the relationship between the switching pattern of space-vector modulation and the type of carrier, and the relationship between the distribution of zero vectors and different zero-sequence signal are systematically established. All the relationships provide a bidirectional bridge for the transformation between carrier-based PWM modulators and space-vector modulation modulators. It is shown that all the drawn conclusions are independent of the load type. Furthermore, the implementations of both space-vector modulation and carrier-based PWM in a closed-loop feedback converter are discussed.",
"title": ""
},
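One concrete, well-known instance of the modulation-signal/space-vector relationship discussed above is min-max zero-sequence injection, which makes a carrier-based modulator reproduce symmetrical space-vector PWM (equal splitting of the two zero vectors). The sketch below assumes ideal sinusoidal references and a unity modulation index purely for illustration.

import math

def svpwm_equivalent_references(m, theta):
    # Carrier-based modulation signals equivalent to symmetrical space-vector PWM
    # m     : modulation index (peak phase reference / half DC-link), illustrative
    # theta : electrical angle in radians
    va = m * math.cos(theta)
    vb = m * math.cos(theta - 2 * math.pi / 3)
    vc = m * math.cos(theta + 2 * math.pi / 3)
    # Zero-sequence component that centres the active vectors in the carrier period,
    # i.e. splits the two zero vectors equally (SVPWM equivalence)
    v0 = -0.5 * (max(va, vb, vc) + min(va, vb, vc))
    return va + v0, vb + v0, vc + v0

# Sample one fundamental period every 60 electrical degrees
for k in range(0, 360, 60):
    refs = svpwm_equivalent_references(1.0, math.radians(k))
    print(k, tuple(round(v, 3) for v in refs))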
{
"docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd",
"text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"title": ""
},
{
"docid": "f29d0ea5ff5c96dadc440f4d4aa229c6",
"text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.",
"title": ""
},
{
"docid": "8a85d05f4ed31d3dba339bb108b39ba4",
"text": "Access to genetic and genomic resources can greatly facilitate biological understanding of plant species leading to improved crop varieties. While model plant species such as Arabidopsis have had nearly two decades of genetic and genomic resource development, many major crop species have seen limited development of these resources due to the large, complex nature of their genomes. Cultivated potato is among the ranks of crop species that, despite substantial worldwide acreage, have seen limited genetic and genomic tool development. As technologies advance, this paradigm is shifting and a number of tools are being developed for important crop species such as potato. This review article highlights numerous tools that have been developed for the potato community with a specific focus on the reference de novo genome assembly and annotation, genetic markers, transcriptomics resources, and newly emerging resources that extend beyond a single reference individual. El acceso a los recursos genéticos y genómicos puede facilitar en gran medida el entendimiento biológico de las especies de plantas, lo que conduce a variedades mejoradas de cultivos. Mientras que el modelo de las especies de plantas como Arabidopsis ha tenido cerca de dos décadas de desarrollo de recursos genéticos y genómicos, muchas especies de cultivos principales han visto desarrollo limitado de estos recursos debido a la naturaleza grande, compleja, de sus genomios. La papa cultivada está ubicada entre las especies de plantas que a pesar de su superficie substancial mundial, ha visto limitado el desarrollo de las herramientas genéticas y genómicas. A medida que avanzan las tecnologías, este paradigma está girando y se han estado desarrollando un número de herramientas para especies importantes de cultivo tales como la papa. Este artículo de revisión resalta las numerosas herramientas que se han desarrollado para la comunidad de la papa con un enfoque específico en la referencia de ensamblaje y registro de genomio de novo, marcadores genéticos, recursos transcriptómicos, y nuevas fuentes emergentes que se extienden más allá de la referencia de un único individuo.",
"title": ""
},
{
"docid": "5a18a7f42ab40cd238c92e19d23e0550",
"text": "As memory scales down to smaller technology nodes, new failure mechanisms emerge that threaten its correct operation. If such failure mechanisms are not anticipated and corrected, they can not only degrade system reliability and availability but also, perhaps even more importantly, open up security vulnerabilities: a malicious attacker can exploit the exposed failure mechanism to take over the entire system. As such, new failure mechanisms in memory can become practical and significant threats to system security. In this work, we discuss the RowHammer problem in DRAM, which is a prime (and perhaps the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability. RowHammer, as it is popularly referred to, is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. It is caused by a hardware failure mechanism called DRAM disturbance errors, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero recently demonstrated that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Several other recent works demonstrated other practical attacks exploiting RowHammer. These include remote takeover of a server vulnerable to RowHammer, takeover of a victim virtual machine by another virtual machine running on the same system, and takeover of a mobile device by a malicious user-level application that requires no permissions. We analyze the root causes of the RowHammer problem and examine various solutions. We also discuss what other vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.",
"title": ""
},
{
"docid": "9847518e92a8f1b6cef2365452b01008",
"text": "This paper presents a Planar Inverted F Antenna (PIFA) tuned with a fixed capacitor to the low frequency bands supported by the Long Term Evolution (LTE) technology. The tuning range is investigated and optimized with respect to the bandwidth and the efficiency of the resulting antenna. Simulations and mock-ups are presented.",
"title": ""
},
{
"docid": "910a3be33d479be4ed6e7e44a56bb8fb",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
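For orientation, the passage's baseline (a standard soft-margin SVM on the Wisconsin breast cancer data) can be reproduced in spirit with scikit-learn as below. This is a generic pipeline, not the authors' six-variant experimental setup, and the split and hyperparameters are arbitrary.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# Standard (soft-margin) SVM with an RBF kernel; feature scaling matters for SVMs
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", round(accuracy_score(y_te, pred), 4))
print("MCC     :", round(matthews_corrcoef(y_te, pred), 4))
print("AUC     :", round(roc_auc_score(y_te, proba), 4))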
{
"docid": "f071a3d699ba4b3452043b6efb14b508",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
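The interpretable shallow variant described in the passage, a tf-idf-weighted bag-of-words fed to a linear SVM, corresponds to a scikit-learn pipeline like the one below. The toy notes and subdomain labels are placeholders; the iDASH/MGH data and the UMLS concept features are not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder clinical notes and medical-subdomain labels (illustrative only)
notes = [
    "patient reports chest pain and shortness of breath, ecg ordered",
    "follow-up for seizure disorder, continue levetiracetam",
    "echocardiogram shows reduced ejection fraction",
    "mri brain reviewed, no acute infarct, migraine management discussed",
]
labels = ["cardiology", "neurology", "cardiology", "neurology"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # tf-idf bag-of-words features
    LinearSVC(C=1.0),                                        # linear SVM classifier
)
clf.fit(notes, labels)
print(clf.predict(["new onset atrial fibrillation, start anticoagulation"]))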
{
"docid": "58cc081ac8e75c77de192f473e1cc10d",
"text": "We present an efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models. The method tightly integrates visibility culling and out-of-core data management with a level-of-detail framework. At preprocessing time, we generate a coarse volume hierarchy by binary space partitioning the input triangle soup. Leaf nodes partition the original data into chunks of a fixed maximum number of triangles, while inner nodes are discretized into a fixed number of cubical voxels. Each voxel contains a compact direction dependent approximation of the appearance of the associated volumetric subpart of the model when viewed from a distance. The approximation is constructed by a visibility aware algorithm that fits parametric shaders to samples obtained by casting rays against the full resolution dataset. At rendering time, the volumetric structure, maintained off-core, is refined and rendered in front-to-back order, exploiting vertex programs for GPU evaluation of view-dependent voxel representations, hardware occlusion queries for culling occluded subtrees, and asynchronous I/O for detecting and avoiding data access latencies. Since the granularity of the multiresolution structure is coarse, data management, traversal and occlusion culling cost is amortized over many graphics primitives. The efficiency and generality of the approach is demonstrated with the interactive rendering of extremely complex heterogeneous surface models on current commodity graphics platforms.",
"title": ""
},
{
"docid": "dd05084594640b9ab87c702059f7a366",
"text": "Researchers and theorists have proposed that feelings of attachment to subgroups within a larger online community or site can increase users' loyalty to the site. They have identified two types of attachment, with distinct causes and consequences. With bond-based attachment, people feel connections to other group members, while with identity-based attachment they feel connections to the group as a whole. In two experiments we show that these feelings of attachment to subgroups increase loyalty to the larger community. Communication with other people in a subgroup but not simple awareness of them increases attachment to the larger community. By varying how the communication is structured, between dyads or with all group members simultaneously, the experiments show that bond- and identity-based attachment have different causes. But the experiments show no evidence that bond and identity attachment have different consequences. We consider both theoretical and methodological reasons why the consequences of bond-based and identity-based attachment are so similar.",
"title": ""
},
{
"docid": "549c8d2033f84890c91966630246e06e",
"text": "Propagation models are used to abstract the actual propagation characteristics of electromagnetic waves utilized for conveying information in a compact form (i.e., a model with a small number of parameters). The correct modeling of propagation and path loss is of paramount importance in wireless sensor network (WSN) system design and analysis [1]. Most of the important performance metrics commonly employed for WSNs, such as energy dissipation, route optimization, reliability, and connectivity, are affected by the utilized propagation model. However, in many studies on WSNs, overly simplistic and unrealistic propagation models are used. One of the reasons for the utilization of such impractical propagation models is the lack of awareness of experimentally available WSN-specific propagation and path-loss models. In this article, necessary succint background information is given on general wireless propagation modeling, and salient WSN-specific constraints on path-loss modeling are summarized. Building upon the provided background, an overview of the experimentally verified propagation models for WSNs is presented, and quantitative comparisons of propagation models employed in WSN research under various scenarios and frequency bands are provided.",
"title": ""
},
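A compact propagation model of the kind the passage surveys is the log-distance model with log-normal shadowing. The sketch below uses placeholder parameters (path-loss exponent, shadowing deviation, reference loss, transmit power, sensitivity); in practice these come from WSN-specific measurement campaigns such as those the article reviews.

import math
import random

def path_loss_db(d, d0=1.0, pl_d0=40.0, n=3.0, sigma=4.0, rng=random):
    # Log-distance path loss with log-normal shadowing:
    # PL(d) = PL(d0) + 10 * n * log10(d / d0) + X_sigma
    # d      : distance in metres (d >= d0)
    # pl_d0  : path loss at the reference distance d0 (dB)
    # n      : path-loss exponent (environment dependent)
    # sigma  : shadowing standard deviation (dB)
    shadowing = rng.gauss(0.0, sigma)
    return pl_d0 + 10.0 * n * math.log10(d / d0) + shadowing

tx_power_dbm = 0.0            # typical low-power WSN node (assumed)
sensitivity_dbm = -95.0       # illustrative receiver sensitivity
for d in (5, 20, 50, 100):
    rx = tx_power_dbm - path_loss_db(d)
    status = "link ok" if rx > sensitivity_dbm else "too weak"
    print(d, "m ->", round(rx, 1), "dBm (", status, ")")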
{
"docid": "b374975ae9690f96ed750a888713dbc9",
"text": "We present a method for densely computing local spherical histograms of oriented gradients (SHOG) in volumetric images. The descriptors are based on the continuous representation of the orientation histograms in the harmonic domain, which we compute very efficiently via spherical tensor products and the fast Fourier transformation. Building upon these local spherical histogram representations, we utilize the Harmonic Filter to create a generic rotation invariant object detection system that benefits from both the highly discriminative representation of local image patches in terms of histograms of oriented gradients and an adaptable trainable voting scheme that forms the filter. We exemplarily demonstrate the effectiveness of such dense spherical 3D descriptors in a detection task on biological 3D images. In a direct comparison to existing approaches, our new filter reveals superior performance.",
"title": ""
},
{
"docid": "90e5eaa383c00a0551a5161f07c683e7",
"text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism.This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching, and markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.",
"title": ""
},
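To make the mechanism concrete, here is a toy, trace-driven model of a distance (delta-based) prefetcher in the spirit of the one the passage proposes for TLB entries: a small table maps the last observed address delta to the deltas that historically followed it. The table size, page trace, and hit accounting are invented and ignore timing.

from collections import defaultdict, deque

class DistancePrefetcher:
    # Toy distance prefetcher: for each observed delta ("distance"), remember the
    # deltas that followed it, and prefetch pages predicted by those deltas.

    def __init__(self, deltas_per_entry=2):
        self.table = defaultdict(lambda: deque(maxlen=deltas_per_entry))
        self.last_page = None
        self.last_delta = None

    def access(self, page):
        prefetches = []
        if self.last_page is not None:
            delta = page - self.last_page
            if self.last_delta is not None and delta not in self.table[self.last_delta]:
                # Learn: record that `delta` followed `last_delta`
                self.table[self.last_delta].append(delta)
            # Predict: deltas that have followed `delta` before suggest future pages
            prefetches = [page + d for d in self.table[delta]]
            self.last_delta = delta
        self.last_page = page
        return prefetches

# Illustrative page-access trace with a mostly regular stride
trace = [0, 2, 4, 6, 8, 10, 13, 15, 17, 19]
pf = DistancePrefetcher()
hits, issued = 0, set()
for page in trace:
    if page in issued:
        hits += 1
    issued.update(pf.access(page))
print("prefetch hits:", hits, "out of", len(trace), "accesses")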
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "8608ccbb61cbfbf3aae7e832ad4be0aa",
"text": "Part A: Fundamentals and Cryptography Chapter 1: A Framework for System Security Chapter 1 aims to describe a conceptual framework for the design and analysis of secure systems with the goal of defining a common language to express “concepts”. Since it is designed both for theoreticians and for practitioners, there are two kinds of applicability. On the one hand a meta-model is proposed to theoreticians, enabling them to express arbitrary axioms of other security models in this special framework. On the other hand the framework provides a language for describing the requirements, designs, and evaluations of secure systems. This information is given to the reader in the introduction and as a consequence he wants to get the specification of the framework. Unfortunately, the framework itself is not described! However, the contents cover first some surrounding concepts like “systems, owners, security and functionality”. These are described sometimes in a confusing way, so that it remains unclear, what the author really wants to focus on. The following comparison of “Qualitative and Quantitative Security” is done 1For example: if the reader is told, that “every system has an owner, and every owner is a system”, there obviously seems to be no difference between these entities (cp. p. 4).",
"title": ""
},
{
"docid": "47a8f987548d6fc03191844e392d9d05",
"text": "A major challenge in collaborative filtering based recommender systems is how to provide recommendations when rating data is sparse or entirely missing for a subset of users or items, commonly known as the cold-start problem. In recent years, there has been considerable interest in developing new solutions that address the cold-start problem. These solutions are mainly based on the idea of exploiting other sources of information to compensate for the lack of rating data. In this paper, we propose a novel algorithmic framework based on matrix factorization that simultaneously exploits the similarity information among users and items to alleviate the cold-start problem. In contrast to existing methods, the proposed algorithm decouples the following two aspects of the cold-start problem: (a) the completion of a rating sub-matrix, which is generated by excluding cold-start users and items from the original rating matrix; and (b) the transduction of knowledge from existing ratings to cold-start items/users using side information. This crucial difference significantly boosts the performance when appropriate side information is incorporated. We provide theoretical guarantees on the estimation error of the proposed two-stage algorithm based on the richness of similarity information in capturing the rating data. To the best of our knowledge, this is the first algorithm that addresses the cold-start problem with provable guarantees. We also conduct thorough experiments on synthetic and real datasets that demonstrate the effectiveness of the proposed algorithm and highlights the usefulness of auxiliary information in dealing with both cold-start users and items.",
"title": ""
},
{
"docid": "28b1cc95aa385664cacbf20661f5cf56",
"text": "Many organizations now emphasize the use of technology that can help them get closer to consumers and build ongoing relationships with them. The ability to compile consumer data profiles has been made even easier with Internet technology. However, it is often assumed that consumers like to believe they can trust a company with their personal details. Lack of trust may cause consumers to have privacy concerns. Addressing such privacy concerns may therefore be crucial to creating stable and ultimately profitable customer relationships. Three specific privacy concerns that have been frequently identified as being of importance to consumers include unauthorized secondary use of data, invasion of privacy, and errors. Results of a survey study indicate that both errors and invasion of privacy have a significant inverse relationship with online purchase behavior. Unauthorized use of secondary data appears to have little impact. Managerial implications include the careful selection of communication channels for maximum impact, the maintenance of discrete “permission-based” contact with consumers, and accurate recording and handling of data.",
"title": ""
},
{
"docid": "0c477aa54f5da088613d1376174feca8",
"text": "In today’s online social networks, it becomes essential to help newcomers as well as existing community members to find new social contacts. In scientific literature, this recommendation task is known as link prediction. Link prediction has important practical applications in social network platforms. It allows social network platform providers to recommend friends to their users. Another application is to infer missing links in partially observed networks. The shortcoming of many of the existing link prediction methods is that they mostly focus on undirected graphs only. This work closes this gap and introduces link prediction methods and metrics for directed graphs. Here, we compare well-known similarity metrics and their suitability for link prediction in directed social networks. We advance existing techniques and propose mining of subgraph patterns that are used to predict links in networks such as GitHub, GooglePlus, and Twitter. Our results show that the proposed metrics and techniques yield more accurate predictions when compared with metrics not accounting for the directed nature of the underlying networks.",
"title": ""
},
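A directed neighbourhood score of the kind compared in the passage can be computed directly from adjacency sets. The sketch below adapts an Adamic-Adar-style weighting to directed "follows" edges; the tiny graph and the exact weighting choice are illustrative, not the paper's metrics.

import math

# Tiny directed "follows" graph: successors[u] = set of nodes that u points to
successors = {
    "a": {"b", "c"},
    "b": {"c", "d"},
    "c": {"d"},
    "d": {"a"},
    "e": {"c", "d"},
}

def out_degree(u):
    return len(successors.get(u, ()))

def directed_adamic_adar(u, v):
    # Score a potential edge u -> v: nodes that u points to and that also point to v,
    # down-weighted by how "popular" (high out-degree) each intermediate node is.
    predecessors_of_v = {w for w, succ in successors.items() if v in succ}
    common = successors.get(u, set()) & predecessors_of_v
    return sum(1.0 / math.log(out_degree(w)) for w in common if out_degree(w) > 1)

candidates = [(u, v) for u in successors for v in successors
              if u != v and v not in successors[u]]
ranked = sorted(candidates, key=lambda e: directed_adamic_adar(*e), reverse=True)
print("top predicted links:", ranked[:3])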
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media",
"title": ""
},
{
"docid": "049c1597f063f9c5fcc098cab8885289",
"text": "When one captures images in low-light conditions, the images often suffer from low visibility. This poor quality may significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a very simple and effective method, named as LIME, to enhance low-light images. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in R, G and B channels. Further, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging real-world low-light images are present to reveal the efficacy of our LIME and show its superiority over several state-of-the-arts.",
"title": ""
}
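The pipeline the passage describes, per-pixel illumination as the channel-wise maximum followed by refinement and division, is easy to sketch with NumPy. The refinement below is only a plain local average standing in for LIME's structure prior, and the gamma and window size are arbitrary, so this is an approximation of the method, not a reimplementation.

import numpy as np
from scipy.ndimage import uniform_filter

def enhance_low_light(img, gamma=0.8, eps=1e-3):
    # img: float array in [0, 1] of shape (H, W, 3); returns the enhanced image.
    illumination = img.max(axis=2)                         # per-pixel max of R, G, B
    illumination = uniform_filter(illumination, size=15)   # crude stand-in for the structure prior
    illumination = np.clip(illumination, eps, 1.0) ** gamma
    return np.clip(img / illumination[..., None], 0.0, 1.0)

# Synthetic dark image for illustration
rng = np.random.default_rng(0)
dark = np.clip(rng.random((64, 64, 3)) * 0.2, 0, 1)
bright = enhance_low_light(dark)
print("mean intensity:", dark.mean().round(3), "->", bright.mean().round(3))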
] |
scidocsrr
|
f941d8a1f1a4494bea04f65a8a386b59
|
On the Depth of Deep Neural Networks: A Theoretical View
|
[
{
"docid": "db433a01dd2a2fd80580ffac05601f70",
"text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teache r network.",
"title": ""
},
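The distillation objective the passage builds on, a student matching the teacher's temperature-softened outputs plus the usual hard-label loss, looks like the sketch below. The logits, temperature, and mixing weight are placeholders, and the hint-based regression onto intermediate teacher layers (the paper's extension) is not included.

import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # alpha * cross-entropy against the temperature-softened teacher distribution
    # + (1 - alpha) * ordinary cross-entropy against the hard labels
    p_teacher = softmax(teacher_logits, T)
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student_T).sum(axis=1).mean() * (T ** 2)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p_student[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

# Placeholder logits for a 3-class problem with two examples
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 0.3]])
student = np.array([[2.5, 1.2, 0.8], [0.5, 2.0, 0.6]])
labels = np.array([0, 1])
print(round(distillation_loss(student, teacher, labels), 4))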
{
"docid": "6efdf43a454ce7da51927c07f1449695",
"text": "We investigate efficient representations of functions that can be written as outputs of so-called sum-product networks, that alternate layers of product and sum operations (see Fig 1 for a simple sum-product network). We find that there exist families of such functions that can be represented much more efficiently by deep sum-product networks (i.e. allowing multiple hidden layers), compared to shallow sum-product networks (constrained to using a single hidden layer). For instance, there is a family of functions fn where n is the number of input variables, such that fn can be computed with a deep sum-product network of log 2 n layers and n−1 units, while a shallow sum-product network (two layers) requires 2 √ n−1 units. These mathematical results are in the same spirit as those by H̊astad and Goldmann (1991) on the limitations of small depth computational circuits. They motivate using deep networks to be able to model complex functions more efficiently than with shallow networks. Exponential gains in terms of the number of parameters are quite significant in the context of statistical machine learning. Indeed, the number of training samples required to optimize a model’s parameters without suffering from overfitting typically increases with the number of parameters. Deep networks thus offer a promising way to learn complex functions from limited data, even though parameter optimization may still be challenging.",
"title": ""
}
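The deep side of the separation stated above can be made concrete by evaluating the alternating product/sum circuit on n = 2^k inputs: pairing adjacent units and halving the layer width at each step gives log_2(n) layers and n/2 + n/4 + ... + 1 = n − 1 hidden units. The random sum weights below are purely illustrative, and this only demonstrates the unit count, not the lower bound for shallow networks.

import numpy as np

def deep_sum_product(x, rng=None):
    # Evaluate an alternating product/sum network on n = 2^k inputs.
    # Each layer combines adjacent pairs, halving the width, so the total
    # number of hidden units is n/2 + n/4 + ... + 1 = n - 1 over log2(n) layers.
    rng = rng or np.random.default_rng(0)
    layer = np.asarray(x, dtype=float)
    units, depth = 0, 0
    product_layer = True
    while layer.size > 1:
        pairs = layer.reshape(-1, 2)
        if product_layer:
            layer = pairs[:, 0] * pairs[:, 1]          # product units
        else:
            w = rng.random(pairs.shape)                # illustrative sum weights
            layer = (w * pairs).sum(axis=1)            # weighted-sum units
        units += layer.size
        depth += 1
        product_layer = not product_layer
    return layer[0], units, depth

value, units, depth = deep_sum_product(np.arange(1, 9))   # n = 8 inputs
print(value, units, depth)   # units = 7 = n - 1, depth = 3 = log2(8)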
] |
[
{
"docid": "07be6a2df7360ef53d7e6d9cc30f621d",
"text": "Fire accidents can cause numerous casualties and heavy property losses, especially, in petrochemical industry, such accidents are likely to cause secondary disasters. However, common fire drill training would cause loss of resources and pollution. We designed a multi-dimensional interactive somatosensory (MDIS) cloth system based on virtual reality technology to simulate fire accidents in petrochemical industry. It provides a vivid visual and somatosensory experience. A thermal radiation model is built in a virtual environment, and it could predict the destruction radius of a fire. The participant position changes are got from Kinect, and shown in virtual environment synchronously. The somatosensory cloth, which could both heat and refrigerant, provides temperature feedback based on thermal radiation results and actual distance. In this paper, we demonstrate the details of the design, and then verified its basic function. Heating deviation from model target is lower than 3.3 °C and refrigerant efficiency is approximately two times faster than heating efficiency.",
"title": ""
},
{
"docid": "d11c2dd512f680e79706f73d4cd3d0aa",
"text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.",
"title": ""
},
{
"docid": "a73167a43aec68b59968a014e553bf8d",
"text": "Between the late 1960s and the beginning of the 1980s, the wide recognition that simple dynamical laws could give rise to complex behaviors was sometimes hailed as a true scientific revolution impacting several disciplines, for which a striking label was coined—“chaos.” Mathematicians quickly pointed out that the purported revolution was relying on the abstract theory of dynamical systems founded in the late 19th century by Henri Poincaré who had already reached a similar conclusion. In this paper, we flesh out the historiographical tensions arising from these confrontations: longue-durée history and revolution; abstract mathematics and the use of mathematical techniques in various other domains. After reviewing the historiography of dynamical systems theory from Poincaré to the 1960s, we highlight the pioneering work of a few individuals (Steve Smale, Edward Lorenz, David Ruelle). We then go on to discuss the nature of the chaos phenomenon, which, we argue, was a conceptual reconfiguration as much as a sociodisciplinary convergence. C © 2002 Elsevier Science (USA)",
"title": ""
},
{
"docid": "3039e9b5271445addc3e824c56f89490",
"text": "From the recent availability of images recorded by synthetic aperture radar (SAR) airborne systems, automatic results of digital elevation models (DEMs) on urban structures have been published lately. This paper deals with automatic extraction of three-dimensional (3-D) buildings from stereoscopic high-resolution images recorded by the SAR airborne RAMSES sensor from the French Aerospace Research Center (ONERA). On these images, roofs are not very textured whereas typical strong L-shaped echoes are visible. These returns generally result from dihedral corners between ground and structures. They provide a part of the building footprints and the ground altitude, but not the building heights. Thus, we present an adapted processing scheme in two steps. First is stereoscopic structure extraction from L-shaped echoes. Buildings are detected on each image using the Hough transform. Then they are recognized during a stereoscopic refinement stage based on a criterion optimization. Second, is height measurement. As most of previous extracted footprints indicate the ground altitude, building heights are found by monoscopic and stereoscopic measures. Between structures, ground altitudes are obtained by a dense matching process. Experiments are performed on images representing an industrial area. Results are compared with a ground truth. Advantages and limitations of the method are brought out.",
"title": ""
},
{
"docid": "4474774add1e75c5476c84bde86f7560",
"text": "This paper shows design and implementation of data warehouse as well as the use of data mining algorithms for the purpose of knowledge discovery as the basic resource of adequate business decision making process. The project is realized for the needs of Student's Service Department of the Faculty of Organizational Sciences (FOS), University of Belgrade, Serbia and Montenegro. This system represents a good base for analysis and predictions in the following time period for the purpose of quality business decision-making by top management. Thus, the first part of the paper shows the steps in designing and development of data warehouse of the mentioned business system. The second part of the paper shows the implementation of data mining algorithms for the purpose of deducting rules, patterns and knowledge as a resource for support in the process of decision making.",
"title": ""
},
{
"docid": "cf8dfff6a026fc3bb4248cd813af9947",
"text": "We consider a multi agent optimization problem where a network of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We propose a fully distributed broadcast-based Alternating Direction Method of Multipliers (ADMM), in which each agent broadcasts the outcome of his local processing to all his neighbors. We show that both the objective function values and the feasibility violation converge with rate O(1/T), where T is the number of iterations. This improves upon the O(1/√T) convergence rate of subgradient-based methods. We also characterize the effect of network structure and the choice of communication matrix on the convergence speed. Because of its broadcast nature, the storage requirements of our algorithm are much more modest compared to the distributed algorithms that use pairwise communication between agents.",
"title": ""
},
{
"docid": "23a9e62d26a54c321e67cbb8bdea2b16",
"text": "An autonomous vehicle following system including control approaches is presented in this paper. An existing robotic driver is used to control a standard passenger vehicle such that no modifications to the car are necessary. Only information about the relative position of the lead vehicle and the motion of the following vehicle is required, and methods are presented to construct a reference trajectory to enable accurate following. A laser scanner is used to detect the lead vehicle and the following vehicle’s ego-motion is sensed using an IMU and wheel encoder. An algorithm was developed and tested to locate the lead vehicle with RMS position and orientation errors of 65mm and 5.8◦ respectively. Several trajectory-based lateral controllers were tested in simulation and then experimentally, with the best controller having an RMS lateral deviation of 37cm from the path of the lead vehicle. A new trajectorybased spacing controller was tested in simulation which allows the following vehicle to reproduce the velocity of the lead vehicle.",
"title": ""
},
{
"docid": "50c931cc73cbb3336d24707dcb5e938a",
"text": "Endochondral ossification, the mechanism responsible for the development of the long bones, is dependent on an extremely stringent coordination between the processes of chondrocyte maturation in the growth plate, vascular expansion in the surrounding tissues, and osteoblast differentiation and osteogenesis in the perichondrium and the developing bone center. The synchronization of these processes occurring in adjacent tissues is regulated through vigorous crosstalk between chondrocytes, endothelial cells and osteoblast lineage cells. Our knowledge about the molecular constituents of these bidirectional communications is undoubtedly incomplete, but certainly some signaling pathways effective in cartilage have been recognized to play key roles in steering vascularization and osteogenesis in the perichondrial tissues. These include hypoxia-driven signaling pathways, governed by the hypoxia-inducible factors (HIFs) and vascular endothelial growth factor (VEGF), which are absolutely essential for the survival and functioning of chondrocytes in the avascular growth plate, at least in part by regulating the oxygenation of developing cartilage through the stimulation of angiogenesis in the surrounding tissues. A second coordinating signal emanating from cartilage and regulating developmental processes in the adjacent perichondrium is Indian Hedgehog (IHH). IHH, produced by pre-hypertrophic and early hypertrophic chondrocytes in the growth plate, induces the differentiation of adjacent perichondrial progenitor cells into osteoblasts, thereby harmonizing the site and time of bone formation with the developmental progression of chondrogenesis. Both signaling pathways represent vital mediators of the tightly organized conversion of avascular cartilage into vascularized and mineralized bone during endochondral ossification.",
"title": ""
},
{
"docid": "1cdee228f9813e4f33df1706ec4e7876",
"text": "Existing methods on sketch based image retrieval (SBIR) are usually based on the hand-crafted features whose ability of representation is limited. In this paper, we propose a sketch based image retrieval method via image-aided cross domain learning. First, the deep learning model is introduced to learn the discriminative features. However, it needs a large number of images to train the deep model, which is not suitable for the sketch images. Thus, we propose to extend the sketch training images via introducing the real images. Specifically, we initialize the deep models with extra image data, and then extract the generalized boundary from real images as the sketch approximation. The using of generalized boundary is under the assumption that their domain is similar with sketch domain. Finally, the neural network is fine-tuned with the sketch approximation data. Experimental results on Flicker15 show that the proposed method has a strong ability to link the associated image-sketch pairs and the results outperform state-of-the-arts methods.",
"title": ""
},
{
"docid": "114d741eb174074d7e0b3797cf3bd3b9",
"text": "System-level simulations have become an indispensable tool for predicting the behavior of wireless cellular systems. As exact link-level modeling is unfeasible due to its huge complexity, mathematical abstraction is required to obtain equivalent results by less complexity. A particular problem in such approaches is the modeling of multiple coherent transmissions. Those arise in multiple-input-multiple-output transmissions at every base station but nowadays so-called coordinated multipoint (CoMP) techniques have become very popular, allowing to allocate two or more spatially separated transmission points. Also, multimedia broadcast single frequency networks (MBSFNs) have been introduced recently in long-term evolution (LTE), which enables efficient broadcasting transmission suitable for spreading information that has a high user demand as well as simultaneously sending updates to a large number of devices. This paper introduces the concept of runtime-precoding, which allows to accurately abstract many coherent transmission schemes while keeping additional complexity at a minimum. We explain its implementation and advantages. For validation, we incorporate the runtime-precoding functionality into the Vienna LTE-A downlink system-level simulator, which is an open source tool, freely available under an academic noncommercial use license. We measure simulation run times and compare them against the legacy approach as well as link-level simulations. Furthermore, we present multiple application examples in the context of intrasite and intersite CoMP for train communications and MBSFN.",
"title": ""
},
{
"docid": "0907539385c59f9bd476b2d1fb723a38",
"text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.",
"title": ""
},
{
"docid": "6379d5330037a774f9ceed4c51bda1f6",
"text": "Despite long-standing observations on diverse cytokinin actions, the discovery path to cytokinin signaling mechanisms was tortuous. Unyielding to conventional genetic screens, experimental innovations were paramount in unraveling the core cytokinin signaling circuitry, which employs a large repertoire of genes with overlapping and specific functions. The canonical two-component transcription circuitry involves His kinases that perceive cytokinin and initiate signaling, as well as His-to-Asp phosphorelay proteins that transfer phosphoryl groups to response regulators, transcriptional activators, or repressors. Recent advances have revealed the complex physiological functions of cytokinins, including interactions with auxin and other signal transduction pathways. This review begins by outlining the historical path to cytokinin discovery and then elucidates the diverse cytokinin functions and key signaling components. Highlights focus on the integration of cytokinin signaling components into regulatory networks in specific contexts, ranging from molecular, cellular, and developmental regulations in the embryo, root apical meristem, shoot apical meristem, stem and root vasculature, and nodule organogenesis to organismal responses underlying immunity, stress tolerance, and senescence.",
"title": ""
},
{
"docid": "f45e076e1a465593da4dd5befb81a650",
"text": "Animacy, word length, and prosody have all been accorded prominent roles in explanations for word order variations in language use. We examined the sequencing effects of these factors in two types of tasks. In recall tasks designed to simulate language production, we found selective effects of animacy. Animate nouns tended to appear as subjects in transitive sentences, but showed no special affinity for initial position in conjunctions within sentences, but showed no special affinity for initial position in conjunctions within sentences, suggesting a stronger involvement of animacy in grammatical role assignment than in word ordering. Word length had no significant impact: Shorter words did not appear earlier than longer words within sentences or within isolated conjunctions of nouns. Prosody had a weak effect on word order in isolated conjunctions, favoring sequences with alternating rhythm, but only in the absence of an animacy contrast. These results tend to confirm a hypothesized role for conceptual (meaning-based) accessibility in grammatical role assignment and to disconfirm a hypothesized role for lexical (form-based) accessibility in word ordering. In a judgment task, forms with animate nouns early were preferred across all constructions, and forms with short words early were often preferred both in sentences and in conjunctions. The findings suggest a possible asymmetry between comprehension and production in functional accounts of word order variations.",
"title": ""
},
{
"docid": "1172addf46ec4c70ec658ab0c0a17902",
"text": "This paper extends research on ethical leadership by proposing a responsibility orientation for leaders. Responsible leadership is based on the concept of leaders who are not isolated from the environment, who critically evaluate prevailing norms, are forward-looking, share responsibility, and aim to solve problems collectively. Adding such a responsibility orientation helps to address critical issues that persist in research on ethical leadership. The paper discusses important aspects of responsible leadership, which include being able to make informed ethical judgments about prevailing norms and rules, communicating effectively with stakeholders, engaging in long-term thinking and in perspective-taking, displaying moral courage, and aspiring to positive change. Furthermore, responsible leadership means actively engaging stakeholders, encouraging participative decision-making, and aiming for shared problem-solving. A case study that draws on in-depth interviews with the representatives of businesses and non-governmental organizations illustrates the practical relevance of thinking about responsibility and reveals the challenges of responsible leadership.",
"title": ""
},
{
"docid": "035feb63adbe5f83b691e8baf89629cc",
"text": "In this article we study the problem of document image representation based on visual features. We propose a comprehensive experimental study that compares three types of visual document image representations: (1) traditional so-called shallow features, such as the RunLength and the Fisher-Vector descriptors, (2) deep features based on Convolutional Neural Networks, and (3) features extracted from hybrid architectures that take inspiration from the two previous ones. We evaluate these features in several tasks ( i.e. classification, clustering, and retrieval) and in different setups ( e.g. domain transfer) using several public and in-house datasets. Our results show that deep features generally outperform other types of features when there is no domain shift and the new task is closely related to the one used to train the model. However, when a large domain or task shift is present, the Fisher-Vector shallow features generalize better and often obtain the best results.",
"title": ""
},
{
"docid": "5fe7cf9d742f79263e804f164b48d208",
"text": "In this paper we consider the cognitive radio system based on spectrum sensing, and propose an error correction technique for its performance improvement. We analyze secondary user link based on Orthogonal Frequency-Division Multiplexing (OFDM), realized by using Universal Software Radio Peripheral N210 platforms. Parameters of low density parity check codes and interleaver that provide significant performance improvement for the acceptable decoding latency are identified. The experimental results will be compared with the Monte Carlo simulation results obtained by using the simplified channel models.",
"title": ""
},
{
"docid": "648a5479933eb4703f1d2639e0c3b5c7",
"text": "The Surgery Treatment Modality Committee of the Korean Gynecologic Oncologic Group (KGOG) has determined to develop a surgical manual to facilitate clinical trials and to improve communication between investigators by standardizing and precisely describing operating procedures. The literature on anatomic terminology, identification of surgical components, and surgical techniques were reviewed and discussed in depth to develop a surgical manual for gynecologic oncology. The surgical procedures provided here represent the minimum requirements for participating in a clinical trial. These procedures should be described in the operation record form, and the pathologic findings obtained from the procedures should be recorded in the pathologic report form. Here, we focused on radical hysterectomy and lymphadenectomy, and we developed a KGOG classification for those conditions.",
"title": ""
},
{
"docid": "24b4fb65f772f4d1aa7f846edc8d860f",
"text": "Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results manifest that our proposed approach significantly improve the state-of-the-art results.",
"title": ""
},
{
"docid": "4e621b825deb27115cc9b98bec849b34",
"text": "veryone who ever taught project management seems to have a favorite disaster story, whether it's the new Denver airport baggage handling system, the London Stock Exchange, or the French Railways. Many of us would point to deficiencies in the software engineering activities that seemingly guarantee failure. It is indeed important that we understand how to engineer systems well, but we must also consider a wider viewpoint: success requires much more than good engineering. We must understand why we are engineering anything at all, and what the investment in money, time, and energy is all about. Who wants the software? In what context will it be applied? Who is paying for it and what do they hope to get from it? This special focus of IEEE Software takes that wider viewpoint and examines different views of how we can achieve success in software projects. The arguments and projects described here are not solely concerned with software but with the otherdeliverables that typically make for a required outcome: fully trained people; a new operational model for a business; the realization of an organizational strategy that depends on new information systems. What critical factors played a role in your last project success? Join us on this intellectual journey from soft ware engineering to business strategy.",
"title": ""
},
{
"docid": "2b8ca8be8d5e468d4cd285ecc726eceb",
"text": "These days, large-scale graph processing becomes more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the highly used systems to process large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. Superstep is a very time-consuming operation, used by Pregel, to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with million vertices. Superstep works like a barrier in Pregel that increases the side effect of skew problem in distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices resided on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature that manifolds the speed of graph analysis programs. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as number of supersteps from 45% to 96%. Runtime speed up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.",
"title": ""
}
] |
scidocsrr
|
5c953b150016a442810c30ba1c79f65a
|
Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration
|
[
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
},
{
"docid": "e364a2ac82f42c87f88b6ed508dc0d8e",
"text": "In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions how noise level changes with respect to brightness and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and featurepreserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] |
[
{
"docid": "f370a8ff8722d341d6e839ec2c7217c1",
"text": "We give the first O(mpolylog(n)) time algorithms for approximating maximum flows in undirected graphs and constructing polylog(n)-quality cut-approximating hierarchical tree decompositions. Our algorithm invokes existing algorithms for these two problems recursively while gradually incorporating size reductions. These size reductions are in turn obtained via ultra-sparsifiers, which are key tools in solvers for symmetric diagonally dominant (SDD) linear systems.",
"title": ""
},
{
"docid": "99ddcb898895b04f4e86337fe35c1713",
"text": "Emerging self-driving vehicles are vulnerable to different attacks due to the principle and the type of communication systems that are used in these vehicles. These vehicles are increasingly relying on external communication via vehicular ad hoc networks (VANETs). VANETs add new threats to self-driving vehicles that contribute to substantial challenges in autonomous systems. These communication systems render self-driving vehicles vulnerable to many types of malicious attacks, such as Sybil attacks, Denial of Service (DoS), black hole, grey hole and wormhole attacks. In this paper, we propose an intelligent security system designed to secure external communications for self-driving and semi self-driving cars. The proposed scheme is based on Proportional Overlapping Score (POS) to decrease the number of features found in the Kyoto benchmark dataset. The hybrid detection system relies on the Back Propagation neural networks (BP), to detect a common type of attack in VANETs: Denial-of-Service (DoS). The experimental results show that the proposed BP-IDS is capable of identifying malicious vehicles in self-driving and semi self-driving vehicles.",
"title": ""
},
{
"docid": "496d0bfff9a88dd6c5c6641bad62c0cd",
"text": "Governments envisioning large-scale national egovernment policies increasingly draw on collaboration with private actors, yet the relationship between dynamics and outcomes of public-private partnership (PPP) is still unclear. The involvement of the banking sector in the emergence of a national electronic identification (e-ID) in Denmark is a case in point. Drawing on an analysis of primary and secondary data, we adopt the theoretical lens of collective action to investigate how transformations over time in the convergence of interests, the interdependence of resources, and the alignment of governance models between government and the banking sector shaped the emergence of the Danish national e-ID. We propose a process model to conceptualize paths towards the emergence of public-private collaboration for digital information infrastructure – a common good.",
"title": ""
},
{
"docid": "8ce498cdbdec9bda55970d39bd9d6bee",
"text": "This paper is about the good side of modal logic, the bad side of modal logic, and how hybrid logic takes the good and fixes the bad. In essence, modal logic is a simple formalism for working with relational structures (or multigraphs). But modal logic has no mechanism for referring to or reasoning about the individual nodes in such structures, and this lessens its effectiveness as a representation formalism. In their simplest form, hybrid logics are upgraded modal logics in which reference to individual nodes is possible. But hybrid logic is a rather unusual modal upgrade. It pushes one simple idea as far as it will go: represent all information as formulas. This turns out to be the key needed to draw together a surprisingly diverse range of work (for example, feature logic, description logic and labelled deduction). Moreover, it displays a number of knowledge representation issues in a new light, notably the importance of sorting.",
"title": ""
},
{
"docid": "03869f2ac07c13bbce6af743ea5d2551",
"text": "In this paper we present a novel vehicle detection method in traffic surveillance scenarios. This work is distinguished by three key contributions. First, a feature fusion backbone network is proposed to extract vehicle features which has the capability of modeling geometric transformations. Second, a vehicle proposal sub-network is applied to generate candidate vehicle proposals based on multi-level semantic feature maps. Finally, a head network is used to refine the categories and locations of these proposals. Benefits from the above cues, vehicles with large variation in occlusion and lighting conditions can be detected with high accuracy. Furthermore, the method also demonstrates robustness in the case of motion blur caused by rapid movement of vehicles. We test our network on DETRAC[21] benchmark detection challenge and it shows the state-of-theart performance. Specifically, the proposed method gets the best performances not only at 4 different level: overall, easy, medium and hard, but also in sunny, cloudy and night conditions.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "d4c7efe10b1444d0f9cb6032856ba4e1",
"text": "This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on micromechanics based design methodology of strain-hardening cement based composites. As example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified. * To appear in Fiber Reinforced Concrete: Present and the Future, Eds: N. Banthia, A. Bentur, and A. Mufti, Canadian Society of Civil Engineers, 1997.",
"title": ""
},
{
"docid": "ee80447709188fab5debfcf9b50a9dcb",
"text": "Prior research by Kornell and Bjork (2007) and Hartwig and Dunlosky (2012) has demonstrated that college students tend to employ study strategies that are far from optimal. We examined whether individuals in the broader—and typically older—population might hold different beliefs about how best to study and learn, given their more extensive experience outside of formal coursework and deadlines. Via a web-based survey, however, we found striking similarities: Learners’ study decisions tend to be driven by deadlines, and the benefits of activities such as self-testing and reviewing studied materials are elf-regulated learning etacognition indset tudy strategies mostly unappreciated. We also found evidence, however, that one’s mindset with respect to intelligence is related to one’s habits and beliefs: Individuals who believe that intelligence can be increased through effort were more likely to value the pedagogical benefits of self-testing, to restudy, and to be intrinsically motivated to learn, compared to individuals who believe that intelligence is fixed. © 2014 Society for Applied Research in Memory and Cognition. Published by Elsevier Inc. All rights With the world’s knowledge at our fingertips, there are increasng opportunities to learn on our own, not only during the years f formal education, but also across our lifespan as our careers, obbies, and interests change. The rapid pace of technological hange has also made such self-directed learning necessary: the bility to effectively self-regulate one’s learning—monitoring one’s wn learning and implementing beneficial study strategies—is, rguably, more important than ever before. Decades of research have revealed the efficacy of various study trategies (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 013, for a review of effective—and less effective—study techiques). Bjork (1994) coined the term, “desirable difficulties,” to efer to the set of study conditions or study strategies that appear to low down the acquisition of to-be-learned materials and make the earning process seem more effortful, but then enhance long-term etention and transfer, presumably because contending with those ifficulties engages processes that support learning and retention. xamples of desirable difficulties include generating information or esting oneself (instead of reading or re-reading information—a relPlease cite this article in press as: Yan, V. X., et al. Habits and beliefs Journal of Applied Research in Memory and Cognition (2014), http://dx.d tively passive activity), spacing out repeated study opportunities instead of cramming), and varying conditions of practice (rather han keeping those conditions constant and predictable). ∗ Corresponding author at: 1285 Franz Hall, Department of Psychology, University f California, Los Angeles, CA 90095, United States. Tel.: +1 310 954 6650. E-mail address: [email protected] (V.X. Yan). ttp://dx.doi.org/10.1016/j.jarmac.2014.04.003 211-3681/© 2014 Society for Applied Research in Memory and Cognition. Published by reserved. Many recent findings, however—both survey-based and experimental—have revealed that learners continue to study in non-optimal ways. 
Learners do not appear, for example, to understand two of the most robust effects from the cognitive psychology literature—namely, the testing effect (that practicing retrieval leads to better long-term retention, compared even to re-reading; e.g., Roediger & Karpicke, 2006a) and the spacing effect (that spacing repeated study sessions leads to better long-term retention than does massing repetitions; e.g., Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Dempster, 1988). A survey of 472 undergraduate students by Kornell and Bjork (2007)—which was replicated by Hartwig and Dunlosky (2012)—showed that students underappreciate the learning benefits of testing. Similarly, Karpicke, Butler, and Roediger (2009) surveyed students’ study strategies and found that re-reading was by far the most popular study strategy and that self-testing tended to be used only to assess whether some level of learning had been achieved, not to enhance subsequent recall. Even when students have some appreciation of effective strategies they often do not implement those strategies. Susser and McCabe (2013), for example, showed that even though students reported understanding the benefits of spaced learning over massed learning, they often do not space their study sessions on a given topic, particularly if their upcoming test is going to have a that guide self-regulated learning: Do they vary with mindset? oi.org/10.1016/j.jarmac.2014.04.003 multiple-choice format, or if they think the material is relatively easy, or if they are simply too busy. In fact, Kornell and Bjork’s (2007) survey showed that students’ study decisions tended to be driven by impending deadlines, rather than by learning goals, Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "07817eb2722fb434b1b8565d936197cf",
"text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.",
"title": ""
},
{
"docid": "bd3e5a403cc42952932a7efbd0d57719",
"text": "The acoustic echo cancellation system is very important in the communication applications that are used these days; in view of this importance we have implemented this system practically by using DSP TMS320C6713 Starter Kit (DSK). The acoustic echo cancellation system was implemented based on 8 subbands techniques using Least Mean Square (LMS) algorithm and Normalized Least Mean Square (NLMS) algorithm. The system was evaluated by measuring the performance according to Echo Return Loss Enhancement (ERLE) factor and Mean Square Error (MSE) factor. Keywords—Acoustic echo canceller; Least Mean Square (LMS); Normalized Least Mean Square (NLMS); TMS320C6713; 8 subbands adaptive filter",
"title": ""
},
{
"docid": "61d31ebda0f9c330e5d86639e0bd824e",
"text": "An electric vehicle (EV) aggregation agent, as a commercial middleman between electricity market and EV owners, participates with bids for purchasing electrical energy and selling secondary reserve. This paper presents an optimization approach to support the aggregation agent participating in the day-ahead and secondary reserve sessions, and identifies the input variables that need to be forecasted or estimated. Results are presented for two years (2009 and 2010) of the Iberian market, and considering perfect and naïve forecast for all variables of the problem.",
"title": ""
},
{
"docid": "05b490844f02e0fefe018022c1032c1c",
"text": "This document describes how to use ms, a program to generate samples under a variety of neutral models. The purpose of this program is to allow one to investigate the statistical properties of such samples, to evaluate estimators or statistical tests, and generally to aid in the interpretation of polymorphism data sets. The typical data set is obtained in a resequencing study in which the same homologous segment is sequenced in several individuals sampled from a population. The classic example of such a data set is the Adh study of Kreitman(1983) in which 11 copies of the Adh gene of Drosophila melanogaster were sequenced. In this case, the copies were isolated from 11 different strains of D. melanogaster collected from scattered locations around the globe. The program ms can be used to generate many independent replicate samples under a variety of assumptions about migration, recombination rate and population size to aid in the interpretation of such polymorphism studies. The samples are generated using the now standard coalescent approach in which the random genealogy of the sample is first generated and then mutations are randomly place on the genealogy (Kingman, 1982; Hudson, 1990; Nordborg, 2001). The usual small sample approximations of the coalescent are used. An infinitesites model of mutation is assumed, and thus multiple-hits and back mutations do not occur. However, when used in conjunction with other programs, finitesite mutation models or micro-satellite models can be studied. For example, the gene trees themselves can be output, and these gene trees can be used as input to other programs which will evolve the sequences under a variety of finite-site models. These are described later. The program is intended to run on Unix, or Unix-like operating systems, such as Linux or MacOsX. The next section describes how to download and compile the program. The subsequent sections described how to run the program and in particular how to specify the parameter values for the simulations. If you use ms for published research, the appropriate citation is:",
"title": ""
},
{
"docid": "1470269bfde3dbbda63ae583bebdfe0f",
"text": "Acquiring local context information and sharing it among co-located devices is critical for emerging pervasive computing applications. The devices belonging to a group of co-located people may need to detect a shared activity (e.g., a meeting) to adapt their devices to support the activity. Today's devices are almost universally equipped with device-to-device communication that easily enables direct context sharing. While existing context sharing models tend not to consider devices' resource limitations or users' constraints, enabling devices to directly share context has significant benefits for efficiency, cost, and privacy. However, as we demonstrate quantitatively, when devices share context via device-to-device communication, it needs to be represented in a size-efficient way that does not sacrifice its expressiveness or accuracy. We present CHITCHAT, a suite of context representations that allows application developers to tune tradeoffs between the size of the representation, the flexibility of the application to update context information, the energy required to create and share context, and the quality of the information shared. We can substantially reduce the size of context representation (thereby reducing applications' overheads when they share their contexts with one another) with only a minimal reduction in the quality of shared contexts.",
"title": ""
},
{
"docid": "e056192e11fb6430ec1d3e64c2336df3",
"text": "Teleological explanations (TEs) account for the existence or properties of an entity in terms of a function: we have hearts because they pump blood, and telephones for communication. While many teleological explanations seem appropriate, others are clearly not warranted--for example, that rain exists for plants to grow. Five experiments explore the theoretical commitments that underlie teleological explanations. With the analysis of [Wright, L. (1976). Teleological Explanations. Berkeley, CA: University of California Press] from philosophy as a point of departure, we examine in Experiment 1 whether teleological explanations are interpreted causally, and confirm that TEs are only accepted when the function invoked in the explanation played a causal role in bringing about what is being explained. However, we also find that playing a causal role is not sufficient for all participants to accept TEs. Experiment 2 shows that this is not because participants fail to appreciate the causal structure of the scenarios used as stimuli. In Experiments 3-5 we show that the additional requirement for TE acceptance is that the process by which the function played a causal role must be general in the sense of conforming to a predictable pattern. These findings motivate a proposal, Explanation for Export, which suggests that a psychological function of explanation is to highlight information likely to subserve future prediction and intervention. We relate our proposal to normative accounts of explanation from philosophy of science, as well as to claims from psychology and artificial intelligence.",
"title": ""
},
{
"docid": "363236815299994c5d155ab2c64b4387",
"text": "The objective of this work is to infer the 3D shape of an object from a single image. We use sculptures as our training and test bed, as these have great variety in shape and appearance. To achieve this we build on the success of multiple view geometry (MVG) which is able to accurately provide correspondences between images of 3D objects under varying viewpoint and illumination conditions, and make the following contributions: first, we introduce a new loss function that can harness image-to-image correspondences to provide a supervisory signal to train a deep network to infer a depth map. The network is trained end-to-end by differentiating through the camera. Second, we develop a processing pipeline to automatically generate a large scale multi-view set of correspondences for training the network. Finally, we demonstrate that we can indeed obtain a depth map of a novel object from a single image for a variety of sculptures with varying shape/texture, and that the network generalises at test time to new domains (e.g. synthetic images).",
"title": ""
},
{
"docid": "9de7af8824594b5de7d510c81585c61b",
"text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "6b05fda194ac3a441a236de04bcc5fc2",
"text": "We have developed a humanoid robot (a cybernetic human called “HRP-4C”) which has the appearance and shape of a human being, can walk and move like one, and interacts with humans using speech recognition. Standing 158 cm tall and weighing 43 kg (including the battery), with the joints and dimensions set to average values for young Japanese females, HRP-4C looks very human-like. In this paper, we present ongoing challenges to create a new bussiness in the contents industry with HRP-4C.",
"title": ""
},
{
"docid": "719783be7139d384d24202688f7fc555",
"text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.",
"title": ""
}
] |
scidocsrr
|
54adf43822992b99c36ee672469d3a22
|
Preventing drive-by download via inter-module communication monitoring
|
[
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
},
{
"docid": "a2c5e8f11a4ac8ff2ec1554d0a67ce1e",
"text": "Over the past few years, injection vulnerabilities have become the primary target for remote exploits. SQL injection, command injection, and cross-site scripting are some of the popular attacks that exploit these vulnerabilities. Taint-tracking has emerged as one of the most promising approaches for defending against these exploits, as it supports accurate detection (and prevention) of popular injection attacks. However, practical deployment of tainttracking defenses has been hampered by a number of factors, including: (a) high performance overheads (often over 100%), (b) the need for deep instrumentation, which has the potential to impact application robustness and stability, and (c) specificity to the language in which an application is written. In order to overcome these limitations, we present a new technique in this paper called taint inference. This technique does not require any source-code or binary instrumentation of the application to be protected; instead, it operates by intercepting requests and responses from this application. For most web applications, this interception may be achieved using network layer interposition or library interposition. We then develop a class of policies called syntaxand taint-aware policies that can accurately detect and/or block most injection attacks. An experimental evaluation shows that our techniques are effective in detecting a broad range of attacks on applications written in multiple languages (including PHP, Java and C), and impose low performance overheads (below 5%).",
"title": ""
}
] |
[
{
"docid": "d09b4b59c30925bae0983c7e56c3386d",
"text": "We describe a system that automatically extracts 3D geometry of an indoor scene from a single 2D panorama. Our system recovers the spatial layout by finding the floor, walls, and ceiling; it also recovers shapes of typical indoor objects such as furniture. Using sampled perspective sub-views, we extract geometric cues (lines, vanishing points, orientation map, and surface normals) and semantic cues (saliency and object detection information). These cues are used for ground plane estimation and occlusion reasoning. The global spatial layout is inferred through a constraint graph on line segments and planar superpixels. The recovered layout is then used to guide shape estimation of the remaining objects using their normal information. Experiments on synthetic and real datasets show that our approach is state-of-the-art in both accuracy and efficiency. Our system can handle cluttered scenes with complex geometry that are challenging to existing techniques.",
"title": ""
},
{
"docid": "4828e830d440cb7a2c0501952033da2f",
"text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.",
"title": ""
},
{
"docid": "70ef4c1904d7d62a99e6c1dda53da095",
"text": "This position paper describes the initial research assumptions to improve music recommendations by including personality and emotional states. By including these psychological factors, we believe that the accuracy of the recommendation can be enhanced. We will give attention to how people use music to regulate their emotional states, and how this regulation is related to their personality. Furthermore, we will focus on how to acquire data from social media (i.e., microblogging sites such as Twitter) to predict the current emotional state of users. Finally, we will discuss how we plan to connect the correct emotionally laden music pieces to support the emotion regulation style of users.",
"title": ""
},
{
"docid": "41c317b0e275592ea9009f3035d11a64",
"text": "We introduce a distribution based model to learn bilingual word embeddings from monolingual data. It is simple, effective and does not require any parallel data or any seed lexicon. We take advantage of the fact that word embeddings are usually in form of dense real-valued lowdimensional vector and therefore the distribution of them can be accurately estimated. A novel cross-lingual learning objective is proposed which directly matches the distributions of word embeddings in one language with that in the other language. During the joint learning process, we dynamically estimate the distributions of word embeddings in two languages respectively and minimize the dissimilarity between them through standard back propagation algorithm. Our learned bilingual word embeddings allow to group each word and its translations together in the shared vector space. We demonstrate the utility of the learned embeddings on the task of finding word-to-word translations from monolingual corpora. Our model achieved encouraging performance on data in both related languages and substantially different languages.",
"title": ""
},
{
"docid": "ea3fd6ece19949b09fd2f5f2de57e519",
"text": "Multiple myeloma is the second most common hematologic malignancy. The treatment of this disease has changed considerably over the last two decades with the introduction to the clinical practice of novel agents such as proteasome inhibitors and immunomodulatory drugs. Basic research efforts towards better understanding of normal and missing immune surveillence in myeloma have led to development of new strategies and therapies that require the engagement of the immune system. Many of these treatments are under clinical development and have already started providing encouraging results. We, for the second time in the last two decades, are about to witness another shift of the paradigm in the management of this ailment. This review will summarize the major approaches in myeloma immunotherapies.",
"title": ""
},
{
"docid": "c99d2914a5da4bb66ab2d3c335e3dc3b",
"text": "A traditional paper-based passport contains a MachineReadable Zone (MRZ) and a Visual Inspection Zone (VIZ). The MRZ has two lines of the holder’s personal data, some document data, and verification characters encoded using the Optical Character Recognition font B (OCRB). The encoded data includes the holder’s name, date of birth, and other identifying information for the holder or the document. The VIZ contains the holder’s photo and signature, usually on the data page. However, the MRZ and VIZ can be easily duplicated with normal document reproduction technology to produce a fake passport which can pass traditional verification. Neither of these features actively verify the holder’s identity; nor do they bind the holder’s identity to the document. A passport also contains pages for stamps of visas and of country entry and exit dates, which can be easily altered to produce fake permissions and travel records. The electronic passport, supporting authentication using secure credentials on a tamper-resistant chip, is an attempt to improve on the security of the paper-based passport at minimum cost. This paper surveys the security mechanisms built into the firstgeneration of authentication mechanisms and compares them with second-generation passports. It analyzes and describes the cryptographic protocols used in Basic Access Control (BAC) and Extended Access Control (EAC).",
"title": ""
},
{
"docid": "370a2009695f1a18b2e6dbe6bc463bb0",
"text": "While automated vehicle technology progresses, potentially leading to a safer and more efficient traffic environment, many challenges remain within the area of human factors, such as user trust for automated driving (AD) vehicle systems. The aim of this paper is to investigate how an appropriate level of user trust for AD vehicle systems can be created via human–machine interaction (HMI). A guiding framework for implementing trust-related factors into the HMI interface is presented. This trust-based framework incorporates usage phases, AD events, trust-affecting factors, and levels explaining each event from a trust perspective. Based on the research findings, the authors recommend that HMI designers and automated vehicle manufacturers take a more holistic perspective on trust rather than focusing on single, “isolated” events, for example understanding that trust formation is a dynamic process that starts long before a user's first contact with the system, and continues long thereafter. Furthermore, factors-affecting trust change, both during user interactions with the system and over time; thus, HMI concepts need to be able to adapt. Future work should be dedicated to understanding how trust-related factors interact, as well as validating and testing the trust-based framework.",
"title": ""
},
{
"docid": "c4e7c757ad5a67b550d09f530b5204ef",
"text": "This paper describes our effort for a planning-based computational model of narrative generation that is designed to elicit surprise in the reader's mind, making use of two temporal narrative devices: flashback and foreshadowing. In our computational model, flashback provides a backstory to explain what causes a surprising outcome, while foreshadowing gives hints about the surprise before it occurs. Here, we present Prevoyant, a planning-based computational model of surprise arousal in narrative generation, and analyze the effectiveness of Prevoyant. The work here also presents a methodology to evaluate surprise in narrative generation using a planning-based approach based on the cognitive model of surprise causes. The results of the experiments that we conducted show strong support that Prevoyant effectively generates a discourse structure for surprise arousal in narrative.",
"title": ""
},
{
"docid": "bdee6c92bcc4437e2f4139078dde72b3",
"text": "In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the \"visual world\" eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., \"point at the candle\"). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions.",
"title": ""
},
{
"docid": "b48811ff3d90ddf174b6165f417357e5",
"text": "The evaluation of recommender systems is key to the successful application of recommender systems in practice. However, recommender-systems evaluation has received too little attention in the recommender-system community, in particular in the community of research-paper recommender systems. In this paper, we examine and discuss the appropriateness of different evaluation methods, i.e. offline evaluations, online evaluations, and user studies, in the context of research-paper recommender systems. We implemented different content-based filtering approaches in the research-paper recommender system of Docear. The approaches differed by the features to utilize (terms or citations), by user model size, whether stop-words were removed, and several other factors. The evaluations show that results from offline evaluations sometimes contradict results from online evaluations and user studies. We discuss potential reasons for the non-predictive power of offline evaluations, and discuss whether results of offline evaluations might have some inherent value. In the latter case, results of offline evaluations were worth to be published, even if they contradict results of user studies and online evaluations. However, although offline evaluations theoretically might have some inherent value, we conclude that in practice, offline evaluations are probably not suitable to evaluate recommender systems, particularly in the domain of research paper recommendations. We further analyze and discuss the appropriateness of several online evaluation metrics such as click-through rate, linkthrough rate, and cite-through rate.",
"title": ""
},
{
"docid": "b9e6ba77023f57aeddc83049284f220b",
"text": "Routing anomalies are a common occurrence on Today’s Internet. Given the vast size of the Internet, detecting such anomalies requires having a large set of vantage points from which to be able to schedule detection tests. An initiative of the RIPE NCC, RIPE Atlas is a globally distributed Internet measurement system that offers a favorable number vantage points. The project examines whether the technical capabilities of RIPE Atlas can be instrumented for the detection of three types of routing anomalies, namely Debogon filtering, Internet censorship and BGP prefix hijacking. By examining existing methodologies for detecting routing anomalies, the project defines a number of tests in RIPE Atlas. The tests examine whether RIPE Atlas is a viable replacement to current detection system and what limitations it presents.",
"title": ""
},
{
"docid": "3738d3c5d5bf4a3de55aa638adac07bb",
"text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.",
"title": ""
},
{
"docid": "d86633f3add015ffc7de96cb4a6e3802",
"text": "Summary • Animator and model checker for B Methode • Model & constrained based checker • ProB findes correct values for operation arguments • ProB enables user to uncover errors in specifications",
"title": ""
},
{
"docid": "0c45c5ee2433578fbc29d29820042abe",
"text": "When Andrew John Wiles was 10 years old, he read Eric Temple Bell’s The Last Problem and was so impressed by it that he decided that he would be the first person to prove Fermat’s Last Theorem. This theorem states that there are no nonzero integers a, b, c, n with n > 2 such that an + bn = cn. This object of this paper is to prove that all semistable elliptic curves over the set of rational numbers are modular. Fermat’s Last Theorem follows as a corollary by virtue of work by Frey, Serre and Ribet.",
"title": ""
},
{
"docid": "c51e1b845d631e6d1b9328510ef41ea0",
"text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.",
"title": ""
},
{
"docid": "e9c450173afdd9aa329e290a18dafac8",
"text": "The gap between domain experts and natural language processing expertise is a barrier to extracting understanding from clinical text. We describe a prototype tool for interactive review and revision of natural language processing models of binary concepts extracted from clinical notes. We evaluated our prototype in a user study involving 9 physicians, who used our tool to build and revise models for 2 colonoscopy quality variables. We report changes in performance relative to the quantity of feedback. Using initial training sets as small as 10 documents, expert review led to final F1scores for the \"appendiceal-orifice\" variable between 0.78 and 0.91 (with improvements ranging from 13.26% to 29.90%). F1for \"biopsy\" ranged between 0.88 and 0.94 (-1.52% to 11.74% improvements). The average System Usability Scale score was 70.56. Subjective feedback also suggests possible design improvements.",
"title": ""
},
{
"docid": "2a818337c472caa1e693edb05722954b",
"text": "UNLABELLED\nThis study focuses on the relationship between classroom ventilation rates and academic achievement. One hundred elementary schools of two school districts in the southwest United States were included in the study. Ventilation rates were estimated from fifth-grade classrooms (one per school) using CO(2) concentrations measured during occupied school days. In addition, standardized test scores and background data related to students in the classrooms studied were obtained from the districts. Of 100 classrooms, 87 had ventilation rates below recommended guidelines based on ASHRAE Standard 62 as of 2004. There is a linear association between classroom ventilation rates and students' academic achievement within the range of 0.9-7.1 l/s per person. For every unit (1 l/s per person) increase in the ventilation rate within that range, the proportion of students passing standardized test (i.e., scoring satisfactory or above) is expected to increase by 2.9% (95%CI 0.9-4.8%) for math and 2.7% (0.5-4.9%) for reading. The linear relationship observed may level off or change direction with higher ventilation rates, but given the limited number of observations, we were unable to test this hypothesis. A larger sample size is needed for estimating the effect of classroom ventilation rates higher than 7.1 l/s per person on academic achievement.\n\n\nPRACTICAL IMPLICATIONS\nThe results of this study suggest that increasing the ventilation rates toward recommended guideline ventilation rates in classrooms should translate into improved academic achievement of students. More studies are needed to fully understand the relationships between ventilation rate, other indoor environmental quality parameters, and their effects on students' health and achievement. Achieving the recommended guidelines and pursuing better understanding of the underlying relationships would ultimately support both sustainable and productive school environments for students and personnel.",
"title": ""
},
{
"docid": "b7dbf710a191e51dc24619b2a520cf31",
"text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
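The differentiable-pruning idea summarized in the passage above can be illustrated with a soft, sigmoid-shaped gate whose threshold is itself a trainable parameter, plus a regularizer that pushes gates toward zero. The PyTorch sketch below is a generic reconstruction under those assumptions; the gate shape, the temperature `beta`, and the penalty weight are illustrative choices, not the paper's exact functions.

```python
import torch
import torch.nn as nn

class SoftPrunedLinear(nn.Module):
    """Linear layer whose weights are gated by a differentiable pruning mask."""
    def __init__(self, in_features, out_features, beta=50.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.threshold = nn.Parameter(torch.tensor(0.01))  # learned jointly with the weights
        self.beta = beta  # sharpness of the soft gate

    def gate(self):
        # ~1 when |w| is well above the threshold, ~0 when below; differentiable everywhere
        return torch.sigmoid(self.beta * (self.weight.abs() - self.threshold))

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.gate())

    def pruning_penalty(self):
        # Regularizer that rewards gates close to zero, i.e. enforces pruning
        return self.gate().mean()

layer = SoftPrunedLinear(64, 32)
x = torch.randn(8, 64)
loss = layer(x).pow(2).mean() + 1e-2 * layer.pruning_penalty()
loss.backward()  # gradients flow to both the weights and the threshold
```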
{
"docid": "c30e938b57863772e8c7bc0085d22f71",
"text": "Game theory is a set of tools developed to model interactions between agents with conflicting interests, and is thus well-suited to address some problems in communications systems. In this paper we present some of the basic concepts of game theory and show why it is an appropriate tool for analyzing some communication problems and providing insights into how communication systems should be designed. We then provided a detailed example in which game theory is applied to the power control problem in a",
"title": ""
}
] |
scidocsrr
|
341eb801bed6bcd32bd441a2473b0d1b
|
Modeling Reportable Events as Turning Points in Narrative
|
[
{
"docid": "6fe77035a5101f60968a189d648e2feb",
"text": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.",
"title": ""
}
] |
[
{
"docid": "b9538c45fc55caff8b423f6ecc1fe416",
"text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.",
"title": ""
},
{
"docid": "73605f6707ea453b6a03ba30ad7a645c",
"text": "The zebra finch is an important model organism in several fields with unique relevance to human neuroscience. Like other songbirds, the zebra finch communicates through learned vocalizations, an ability otherwise documented only in humans and a few other animals and lacking in the chicken—the only bird with a sequenced genome until now. Here we present a structural, functional and comparative analysis of the genome sequence of the zebra finch (Taeniopygia guttata), which is a songbird belonging to the large avian order Passeriformes. We find that the overall structures of the genomes are similar in zebra finch and chicken, but they differ in many intrachromosomal rearrangements, lineage-specific gene family expansions, the number of long-terminal-repeat-based retrotransposons, and mechanisms of sex chromosome dosage compensation. We show that song behaviour engages gene regulatory networks in the zebra finch brain, altering the expression of long non-coding RNAs, microRNAs, transcription factors and their targets. We also show evidence for rapid molecular evolution in the songbird lineage of genes that are regulated during song experience. These results indicate an active involvement of the genome in neural processes underlying vocal communication and identify potential genetic substrates for the evolution and regulation of this behaviour.",
"title": ""
},
{
"docid": "aa3fca665c7a306267cba71e977e54df",
"text": "Recursive neural networks are conceived for processing graphs and extend the well-known recurrent model for processing sequences. In Frasconi et al. (1998), recursive neural networks can deal only with directed ordered acyclic graphs (DOAGs), in which the children of any given node are ordered. While this assumption is reasonable in some applications, it introduces unnecessary constraints in others. In this paper, it is shown that the constraint on the ordering can be relaxed by using an appropriate weight sharing, that guarantees the independence of the network output with respect to the permutations of the arcs leaving from each node. The method can be used with graphs having low connectivity and, in particular, few outcoming arcs. Some theoretical properties of the proposed architecture are given. They guarantee that the approximation capabilities are maintained, despite the weight sharing.",
"title": ""
},
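The weight-sharing trick described in the passage above, which makes the output independent of the order of a node's children, can be illustrated by aggregating child states with a symmetric operation such as a sum. The sketch below is a minimal, generic reconstruction of that idea; the state size and the single shared child transform are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class UnorderedRecursiveCell(nn.Module):
    """Node state from its label plus an order-invariant sum of child states."""
    def __init__(self, label_dim, state_dim):
        super().__init__()
        self.label_proj = nn.Linear(label_dim, state_dim)
        self.child_proj = nn.Linear(state_dim, state_dim, bias=False)  # shared by all children

    def forward(self, label, child_states):
        if child_states:
            agg = torch.stack(child_states).sum(dim=0)  # symmetric aggregation
        else:
            agg = torch.zeros(self.child_proj.in_features)
        return torch.tanh(self.label_proj(label) + self.child_proj(agg))

cell = UnorderedRecursiveCell(label_dim=4, state_dim=8)
label = torch.randn(4)
children = [cell(torch.randn(4), []) for _ in range(3)]
root_a = cell(label, children)
root_b = cell(label, children[::-1])  # equal up to floating-point rounding: order is ignored
```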
{
"docid": "1d03d6f7cd7ff9490dec240a36bf5f65",
"text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.",
"title": ""
},
{
"docid": "0dc3a616cf2d9c4dac08cbe94bbbed0e",
"text": "Digital news with a variety topics is abundant on the internet. The problem is to classify news based on its appropriate category to facilitate user to find relevant news rapidly. Classifier engine is used to split any news automatically into the respective category. This research employs Support Vector Machine (SVM) to classify Indonesian news. SVM is a robust method to classify binary classes. The core processing of SVM is in the formation of an optimum separating plane to separate the different classes. For multiclass problem, a mechanism called one against one is used to combine the binary classification result. Documents were taken from the Indonesian digital news site, www.kompas.com. The experiment showed a promising result with the accuracy rate of 85%. This system is feasible to be implemented on Indonesian news classification. Keywords—classification, Indonesian news, text processing, support vector machine",
"title": ""
},
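A minimal sketch of the kind of pipeline the passage above describes, TF-IDF features plus an SVM whose multiclass decisions are combined one-against-one, is shown below using scikit-learn. The feature choice and the toy documents are assumptions for illustration; scikit-learn's SVC already trains one binary SVM per class pair (one-vs-one) internally.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for Indonesian news articles and their categories.
docs = ["harga saham naik tajam", "tim nasional menang dua gol", "pemilu digelar serentak"]
labels = ["ekonomi", "olahraga", "politik"]

# SVC trains one binary SVM per class pair (one-against-one) and votes at prediction time.
clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", decision_function_shape="ovo"))
clf.fit(docs, labels)
print(clf.predict(["gol kemenangan dicetak di menit akhir"]))
```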
{
"docid": "dcd888a5eb22f18249f1528da837a1bc",
"text": "This paper uses a fault diagnosis methodology based on bond graph to detect and isolate faults in electromechanical actuator (EMA). Firstly, transform the EMA behavioural model into a diagnostic model, and then the analytical redundancy relations (ARRs) for fault detection can be derived from this model. Finally, a fault signature matrix (FSM) for fault isolation is established. Simulation results show the successful detection & isolation of free-run fault in EMA using the proposed method.",
"title": ""
},
{
"docid": "60b3fc1ce3ac8c053ff98c0861b5984a",
"text": "It is now widely recognised that constructing a domain model, or ontology, is an important step in the development of knowledge based systems. What is lacking, however, is a clear understanding of how to build ontologies. In this paper we survey the work which has been done so far in beginning to provide a methodology for building ontologies. This work is still formative, and relies heavily on particular experiences. We also provide some discussion of this work, and identify the key issues that must be addressed if we are to move on from ontology construction being an art and to make it an understood engineering process.",
"title": ""
},
{
"docid": "47baaddefd3476ce55d39a0f111ade5a",
"text": "We propose a novel method for classifying resume data of job applicants into 27 different job categories using convolutional neural networks. Since resume data is costly and hard to obtain due to its sensitive nature, we use domain adaptation. In particular, we train a classifier on a large number of freely available job description snippets and then use it to classify resume data. We empirically verify a reasonable classification performance of our approach despite having only a small amount of labeled resume data available.",
"title": ""
},
{
"docid": "8f6682ddcc435c95ae3ef35ebb84de7f",
"text": "A series of 59 patients was treated and operated on for pain felt over the area of the ischial tuberosity and radiating down the back of the thigh. This condition was labeled as the \"hamstring syndrome.\" Pain was typically incurred by assuming a sitting position, stretching the affected posterior thigh, and running fast. The patients usually had a history of recurrent hamstring \"tears.\" Their symptoms were caused by the tight, tendinous structures of the lateral insertion area of the hamstring muscles to the ischial tuberosity. Upon division of these structures, complete relief was obtained in 52 of the 59 patients.",
"title": ""
},
{
"docid": "40df4f2d0537bca3cf92dc3005d2b9f3",
"text": "The pages of this Sample Chapter may have slight variations in final published form. H istorically, we talk of first-force psychodynamic, second-force cognitive-behavioral, and third-force existential-humanistic counseling and therapy theories. Counseling and psychotherapy really began with Freud and psychoanalysis. James Watson and, later, B. F. Skinner challenged Freud's emphasis on the unconscious and focused on observable behavior. Carl Rogers, with his person-centered counseling, revolutionized the helping professions by focusing on the importance of nurturing a caring therapist-client relationship in the helping process. All three approaches are still alive and well in the fields of counseling and psychology, as discussed in Chapters 5 through 10. As you reflect on the new knowledge and skills you exercised by reading the preceding chapters and completing the competency-building activities in those chapters, hopefully you part three 319 will see that you have gained a more sophisticated foundational understanding of the three traditional theoretical forces that have shaped the fields of counseling and therapy over the past one hundred years. Efforts in this book have been intended to bring your attention to both the strengths and limitations of psychodynamic, cognitive-behavioral, and existential-humanistic perspectives. With these perspectives in mind, the following chapters examine the fourth major theoretical force that has emerged in the mental health professions over the past 40 years: the multicultural-feminist-social justice counseling world-view. The perspectives of the fourth force challenge you to learn new competencies you will need to acquire to work effectively, respectfully, and ethically in a culturally diverse 21st-century society. Part Three begins by discussing the rise of the feminist counseling and therapy perspective (Chapter 11) and multicultural counseling and therapy (MCT) theories (Chapter 12). To assist you in synthesizing much of the information contained in all of the preceding chapters, Chapter 13 presents a comprehensive and integrative helping theory referred to as developmental counseling and therapy (DCT). Chapter 14 offers a comprehensive examination of family counseling and therapy theories to further extend your knowledge of ways that mental health practitioners can assist entire families in realizing new and untapped dimensions of their collective well-being. Finally Chapter 15 provides guidelines to help you develop your own approach to counseling and therapy that complements a growing awareness of your own values, biases, preferences, and relational compe-tencies as a mental health professional. Throughout, competency-building activities offer you opportunities to continue to exercise new skills associated with the different theories discussed in Part Three. …",
"title": ""
},
{
"docid": "f122373d44be16dadd479c75cca34a2a",
"text": "This paper presents the design, fabrication, and evaluation of a novel type of valve that uses an electropermanent magnet [1]. This valve is then used to build actuators for a soft robot. The developed EPM valves require only a brief (5 ms) pulse of current to turn flow on or off for an indefinite period of time. EPMvalves are characterized and demonstrated to be well suited for the control of elastomer fluidic actuators. The valves drive the pressurization and depressurization of fluidic channels within soft actuators. Furthermore, the forward locomotion of a soft, multi-actuator rolling robot is driven by EPM valves. The small size and energy-efficiency of EPM valves may make them valuable in soft mobile robot applications.",
"title": ""
},
{
"docid": "18b32aa0ffd8a3a7b84f9768d57b5cde",
"text": "In this paper we propose a recognition system of medical concepts from free text clinical reports. Our approach tries to recognize also concepts which are named with local terminology, with medical writing scripts, short words, abbreviations and even spelling mistakes. We consider a clinical terminology ontology (Snomed-CT), as a dictionary of concepts. In a first step we obtain an embedding model using word2vec methodology from a big corpus database of clinical reports. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space, and so the geometrical similarity can be considered a measure of semantic relation. We have considered 615513 emergency clinical reports from the Hospital \"Rafael Méndez\" in Lorca, Murcia. In these reports there are a lot of local language of the emergency domain, medical writing scripts, short words, abbreviations and even spelling mistakes. With the model obtained we represent the words and sentences as vectors, and by applying cosine similarity we identify which concepts of the ontology are named in the text. Finally, we represent the clinical reports (EHR) like a bag of concepts, and use this representation to search similar documents. The paper illustrates 1) how we build the word2vec model from the free text clinical reports, 2) How we extend the embedding from words to sentences, and 3) how we use the cosine similarity to identify concepts. The experimentation, and expert human validation, shows that: a) the concepts named in the text with the ontology terminology are well recognized, and b) others concepts that are not named with the ontology terminology are also recognized, obtaining a high precision and recall measures.",
"title": ""
},
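A minimal sketch of the word2vec-plus-cosine-similarity step described in the passage above follows, using gensim (version 4 API assumed). Averaging word vectors into a sentence vector and matching against concept-term vectors is one simple realization of the described idea; the corpus, the concept list, and all hyperparameters here are placeholders.

```python
import numpy as np
from gensim.models import Word2Vec

# Placeholder corpus of tokenized clinical report sentences.
sentences = [["chest", "pain", "radiating", "to", "left", "arm"],
             ["abdominal", "pain", "with", "nausea", "and", "vomiting"],
             ["no", "fever", "no", "cough"]]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

def embed(tokens):
    # Average the available word vectors to get a simple sentence vector.
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholder ontology terms standing in for SNOMED-CT concept descriptions.
concepts = {"chest pain": ["chest", "pain"], "nausea": ["nausea"]}
report = ["patient", "complains", "of", "pain", "in", "the", "chest"]
scores = {c: cosine(embed(report), embed(toks)) for c, toks in concepts.items()}
print(max(scores, key=scores.get))  # concept with the highest cosine similarity
```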
{
"docid": "f0f26a3a4d45d5a88c77dc45088176e7",
"text": "Advances in Physics Publication details, including instructions for authors and subscription information: http://www.informaworld.com/smpp/title~content=t713736250 A dynamical systems approach to mixing and segregation of granular materials in tumblers Steven W. Meier a; Richard M. Lueptow b; Julio M. Ottino ab a Department of Chemical and Biological Engineering, Northwestern University, Evanston, Illinois 60208, USA b Department of Mechanical Engineering, Northwestern University, Evanston, Illinois 60208, USA",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "6a99fc2cc909aff365b62e381859aebb",
"text": "thought (pp. 203–222). Cambridge, MA: MIT Press. Gentner, D., & Imai, M. (1992). Is the future always ahead? Evidence for system-mappings in understanding space-time metaphors. Proceedings of the Fourteenth Annual Meeting of the Cognitive Science Society (pp. 510–515). Gentner, D., Imai, M., & Boroditsky, L. (2002). As time goes by: Evidence for two systems in processing space > time metaphors. Language and Cognitive Processes, 17, 537–565. Gibbs, R. (1994). The poetics of mind. Cambridge, UK: Cambridge University Press. Gibbs, R. (1996). Why many concepts are metaphorical. Cognition, 61, 309–319. Gibbs, R., & Colston, H. (1995). The cognitive psychological reality of image-schemas and their transformations. Cognitive Linguistics, 6, 347–378. Glucksberg, S., Brown, M., & McGlone, M. (1993). Conceptual metaphors are not automatically accessed during idiom comprehension. Memory and Cognition, 21, 711–719. Johnson, M. (1987). The body in the mind: The bodily basis of meaning, imagination and reason. Chicago: University of Chicago Press. Lakoff, G. (1993). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and thought (2nd ed.; pp. 202–251). Cambridge, UK: Cambridge University Press. Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press. Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh. New York: Basic Books. Lakoff, G., & Núñez, R. (2000). Where mathematics comes from: How the embodied mind brings mathematics into being. New York: Basic Books. Levinson, S. (2003). Space in language and cognition: Explorations in cognitive diversity. Cambridge, UK: Cambridge University Press. McGlone, M., & Harding, J. (1998). Back (or forward?) to the future: The role of perspective in temporal language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 1211–1223. Moore, K. E. (2000). Spatial experience and temporal metaphors in Wolof: Point of view, conceptual mapping, and linguistic practice. Unpublished doctoral dissertation, University of California, Berkeley. Núñez, R. (1999). Could the future taste purple? In R. Núñez & W. Freeman (Eds.), Reclaiming cognition: The primacy of action, intention, and emotion (pp. 41–60). Thorverton, UK: Imprint Academic. Núñez, R., & Sweetser, E. (in press). Looking ahead to the past: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science. Turner, M. (1987). Death is the mother of beauty: Mind, metaphor, criticism. Chicago: University of",
"title": ""
},
{
"docid": "65415effb35f9c8234f81fdef2916f42",
"text": "The scanpath comparison framework based on string editing is revisited. The previous method of clustering based on k-means \"preevaluation\" is replaced by the mean shift algorithm followed by elliptical modeling via Principal Components Analysis. Ellipse intersection determines cluster overlap, with fast nearest-neighbor search provided by the kd-tree. Subsequent construction of Y - matrices and parsing diagrams is fully automated, obviating prior interactive steps. Empirical validation is performed via analysis of eye movements collected during a variant of the Trail Making Test, where participants were asked to visually connect alphanumeric targets (letters and numbers). The observed repetitive position similarity index matches previously published results, providing ongoing support for the scanpath theory (at least in this situation). Task dependence of eye movements may be indicated by the global position index, which differs considerably from past results based on free viewing.",
"title": ""
},
{
"docid": "8d2b28892efc5cf4ab228fc599f5e91f",
"text": "Will reading habit influence your life? Many say yes. Reading cooperative control of distributed multi agent systems is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.",
"title": ""
},
{
"docid": "42b1052a0d1e1536228b1b90602051ea",
"text": "Improving the quality of healthcare and the prospects of \"aging in place\" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.",
"title": ""
},
{
"docid": "d18f9954bc8140fbf18e723f80523e8f",
"text": "A wideband circularly polarized reconfigurable patch antenna with L-shaped feeding probes is presented, which can generate unidirectional radiation performance that is switchable between left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). To realize this property, an L-probe fed square patch antenna is chosen as the radiator. A compact reconfigurable feeding network is implemented to excite the patch and generate either LHCP or RHCP over a wide operating bandwidth. The proposed antenna achieves the desired radiation patterns and has excellent characteristics, including a wide bandwidth, a compact structure, and a low profile. Measured results exhibit approximately identical performance for both polarization modes. Wide impedance, 31.6% from 1.2 to 1.65 GHz, and axial-ratio, 20.8% from 1.29 to 1.59 GHz, bandwidths are obtained. The gain is very stable across the entire bandwidth with a 6.9-dBic peak value. The reported circular-polarization reconfigurable antenna can mitigate the polarization mismatching problem in multipath wireless environments, increase the channel capacity of the system, and enable polarization coding.",
"title": ""
},
{
"docid": "18e5b72779f6860e2a0f2ec7251b0718",
"text": "This paper presents a novel dielectric resonator filter exploiting dual TM11 degenerate modes. The dielectric rod resonators are short circuited on the top and bottom surfaces to the metallic cavity. The dual-mode cavities can be conveniently arranged in many practical coupling configurations. Through-holes in height direction are made in each of the dielectric rods for the frequency tuning and coupling screws. All the coupling elements, including inter-cavity coupling elements, are accessible from the top of the filter cavity. This planar coupling configuration is very attractive for composing a diplexer or a parallel multifilter assembly using the proposed filter structure. To demonstrate the new filter technology, two eight-pole filters with cross-couplings for UMTS band are prototyped and tested. It has been experimentally shown that as compared to a coaxial combline filter with a similar unloaded Q, the proposed dual-mode filter can save filter volume by more than 50%. Moreover, a simple method that can effectively suppress the lower band spurious mode is also presented.",
"title": ""
}
] |
scidocsrr
|
b5c668555a40fb7c6bc55f058b329202
|
Translingual Mining from Text Data
|
[
{
"docid": "7fa92e07f76bcefc639ae807147b8d7b",
"text": "We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.",
"title": ""
},
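As a rough illustration of the classification step described in the passage above, the sketch below trains a maximum-entropy (logistic regression) model on hand-crafted sentence-pair features. The two features used (length ratio and dictionary word overlap) and the tiny bilingual lexicon are purely hypothetical stand-ins for the richer feature set a real system would use.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical toy lexicon mapping source words to possible translations.
lexicon = {"casa": {"house", "home"}, "gato": {"cat"}, "rojo": {"red"}}

def features(src, tgt):
    src_toks, tgt_toks = src.split(), tgt.split()
    length_ratio = min(len(src_toks), len(tgt_toks)) / max(len(src_toks), len(tgt_toks))
    covered = sum(1 for w in src_toks if lexicon.get(w, set()) & set(tgt_toks))
    return [length_ratio, covered / len(src_toks)]

pairs = [("la casa es roja", "the house is red", 1),
         ("el gato duerme", "the cat sleeps", 1),
         ("el gato duerme", "stock markets fell sharply today", 0),
         ("la casa es roja", "he plays the piano", 0)]

X = [features(s, t) for s, t, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)          # maximum-entropy classifier
print(clf.predict_proba([features("el gato rojo", "the red cat")])[0, 1])
```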
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
}
] |
[
{
"docid": "44c9de5fbaac78125277a9995890b43c",
"text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.",
"title": ""
},
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "f1dd866b1cdd79716f2bbc969c77132a",
"text": "Fiber optic sensor technology offers the possibility of sensing different parameters like strain, temperature, pressure in harsh environment and remote locations. these kinds of sensors modulates some features of the light wave in an optical fiber such an intensity and phase or use optical fiber as a medium for transmitting the measurement information. The advantages of fiber optic sensors in contrast to conventional electrical ones make them popular in different applications and now a day they consider as a key component in improving industrial processes, quality control systems, medical diagnostics, and preventing and controlling general process abnormalities. This paper is an introduction to fiber optic sensor technology and some of the applications that make this branch of optic technology, which is still in its early infancy, an interesting field. Keywords—Fiber optic sensors, distributed sensors, sensor application, crack sensor.",
"title": ""
},
{
"docid": "47b4b22cee9d5693c16be296afe61982",
"text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"title": ""
},
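Since backpropagation cannot pass through the discrete observe/emit decisions, the passage above falls back on REINFORCE. The snippet below is a generic, minimal REINFORCE update (score-function gradient on a categorical policy), not the authors' full recurrent agent; the observation and return tensors are placeholders.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 3))  # 3 discrete actions
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.randn(5, 16)                    # placeholder observations from 5 decision steps
dist = torch.distributions.Categorical(logits=policy(obs))
actions = dist.sample()                     # non-differentiable sampling step
returns = torch.randn(5)                    # placeholder returns earned by those actions

# REINFORCE: raise the log-probability of actions in proportion to the return they earned.
baseline = returns.mean()                   # simple variance-reduction baseline
loss = -(dist.log_prob(actions) * (returns - baseline)).mean()

opt.zero_grad()
loss.backward()
opt.step()
```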
{
"docid": "6c4a7a6d21c85f3f2f392fbb1621cc51",
"text": "The International Academy of Education (IAE) is a not-for-profit scientific association that promotes educational research, and its dissemination and implementation. Founded in 1986, the Academy is dedicated to strengthening the contributions of research, solving critical educational problems throughout the world, and providing better communication among policy makers, researchers, and practitioners. The general aim of the IAE is to foster scholarly excellence in all fields of education. Towards this end, the Academy provides timely syntheses of research-based evidence of international importance. The Academy also provides critiques of research and of its evidentiary basis and its application to policy. This booklet about teacher professional learning and development has been prepared for inclusion in the Educational Practices Series developed by the International Academy of Education and distributed by the International Bureau of Education and the Academy. As part of its mission, the Academy provides timely syntheses of research on educational topics of international importance. This is the eighteenth in a series of booklets on educational practices that generally improve learning. This particular booklet is based on a synthesis of research evidence produced for the New Zealand Ministry of Education's Iterative Best Evidence Synthesis (BES) Programme, which is designed to be a catalyst for systemic improvement and sustainable development in education. This synthesis, and others in the series, are available electronically at www.educationcounts.govt.nz/themes/BES. All BESs are written using a collaborative approach that involves the writers, teacher unions, principal groups, teacher educators, academics, researchers, policy advisers, and other interested parties. To ensure its rigour and usefulness, each BES follows national guidelines developed by the Ministry of Education. Professor Helen Timperley was lead writer for the Teacher Professional Learning and Development: Best Evidence Synthesis Iteration [BES], assisted by teacher educators Aaron Wilson and Heather Barrar and research assistant Irene Fung, all of the University of Auckland. The BES is an analysis of 97 studies of professional development that led to improved outcomes for the students of the participating teachers. Most of these studies came from the United States, New Zealand, the Netherlands, the United Kingdom, Canada, and Israel. Dr Lorna Earl provided formative quality assurance for the synthesis; Professor John Hattie and Dr Gavin Brown oversaw the analysis of effect sizes. Helen Timperley is Professor of Education at the University of Auckland. The primary focus of her research is promotion of professional and organizational learning in schools for the purpose of improving student learning. She has …",
"title": ""
},
{
"docid": "e08914f566fde1dd91a5270d0e12d886",
"text": "Automation in agriculture system is very important these days. This paper proposes an automated system for irrigating the fields. ESP-8266 WIFI module chip is used to connect the system to the internet. Various types of sensors are used to check the content of moisture in the soil, and the water is supplied to the soil through the motor pump. IOT is used to inform the farmers of the supply of water to the soil through an android application. Every time water is given to the soil, the farmer will get to know about that.",
"title": ""
},
{
"docid": "79de6591c4d7bc26d2f2eea2f2b19756",
"text": "This paper presents a MOOC-ready online FPGA laboratory platform which targets computer system experiments. Goal of design is to provide user with highly approximate experience and results as offline experiments. Rich functions are implemented by utilizing SoC FPGA as the controller of lab board. The design details and effects are discussed in this paper.",
"title": ""
},
{
"docid": "c4f9c924963cadc658ad9c97560ea252",
"text": "A novel broadband circularly polarized (CP) antenna is proposed. The operating principle of this CP antenna is different from those of conventional CP antennas. An off-center-fed dipole is introduced to achieve the 90° phase difference required for circular polarization. The new CP antenna consists of two off-center-fed dipoles. Combining such two new CP antennas leads to a bandwidth enhancement for circular polarization. A T-shaped microstrip probe is used to excite the broadband CP antenna, featuring a simple planar configuration. It is shown that the new broadband CP antenna achieves an axial ratio (AR) bandwidth of 55% (1.69-3.0 GHz) for AR <; 3 dB, an impedance bandwidth of 60% (1.7-3.14 GHz) for return loss (RL) > 15 dB, and an antenna gain of 6-9 dBi. The new mechanism for circular polarization is described and an experimental verification is presented.",
"title": ""
},
{
"docid": "408ef85850165cb8ffa97811cb5dc957",
"text": "Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequence. Comparing to landmark-based sparse face shape representation, our method can produce the segmentation masks of individual facial components, which can better reflect their detailed shape variations. By integrating convolutional LSTM (ConvLSTM) algorithm with fully convolutional networks (FCN), our new ConvLSTM-FCN model works on a per-sequence basis and takes advantage of the temporal correlation in video clips. In addition, we also propose a novel loss function, called segmentation loss, to directly optimise the intersection over union (IoU) performances. In practice, to further increase segmentation accuracy, one primary model and two additional models were trained to focus on the face, eyes, and mouth regions, respectively. Our experiment shows the proposed method has achieved a 16.99% relative improvement (from 54.50 to 63.76% mean IoU) over the baseline FCN model on the 300 Videos in the Wild (300VW) dataset.",
"title": ""
},
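The segmentation loss mentioned in the passage above, which directly optimizes intersection over union, is commonly implemented as a differentiable soft-IoU. The sketch below shows that generic formulation; it may differ in detail from the paper's exact loss.

```python
import torch

def soft_iou_loss(pred, target, eps=1e-6):
    """pred: sigmoid probabilities in [0, 1]; target: binary mask. Both shaped (N, H, W)."""
    inter = (pred * target).sum(dim=(1, 2))
    union = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2)) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()  # minimizing this maximizes IoU

pred = torch.rand(2, 64, 64, requires_grad=True)
target = (torch.rand(2, 64, 64) > 0.5).float()
loss = soft_iou_loss(pred, target)
loss.backward()
```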
{
"docid": "f4e1ed913d3fd6e82a1651944d7a6e4c",
"text": "The availability of massive data about sports activities offers nowadays the opportunity to quantify the relation between performance and success. In this work, we analyze more than 6,000 games and 10 million events in the six major European leagues and investigate the relation between team performance and success in soccer competitions. We discover that a team’s success in the national tournament is significantly related to its typical performance. Moreover, we observe that while victory and defeats can be explained by the team’s performance during a game, draws are difficult to describe with a machine learning approach. We then perform a simulation of an entire season of the six leagues where the outcome of every game is replaced by a synthetic outcome (victory, defeat, or draw) based on a machine learning model trained on the previous seasons. We find that the final rankings in the simulated tournaments are close to the actual rankings in the real tournaments, suggesting that a complex systems’ view on soccer has the potential of revealing hidden patterns regarding the relation between performance and success.",
"title": ""
},
{
"docid": "0610ec403ed86dd1cf2f84073b59cc37",
"text": "SQL injection attacks pose a serious threat to the security of Web applications because they can give attackers unrestricted access to databases that contain sensitive information. In this paper, we propose a new, highly automated approach for protecting existing Web applications against SQL injection. Our approach has both conceptual and practical advantages over most existing techniques. From the conceptual standpoint, the approach is based on the novel idea of positive tainting and the concept of syntax-aware evaluation. From the practical standpoint, our technique is at the same time precise and efficient and has minimal deployment requirements. The paper also describes wasp, a tool that implements our technique, and a set of studies performed to evaluate our approach. In the studies, we used our tool to protect several Web applications and then subjected them to a large and varied set of attacks and legitimate accesses. The evaluation was a complete success: wasp successfully and efficiently stopped all of the attacks without generating any false positives.",
"title": ""
},
{
"docid": "c974e6b4031fde2b8e1de3ade33caef4",
"text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.",
"title": ""
},
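For reference, the quantile regression framework used in the passage above estimates the τ-th conditional quantile by minimizing the check (pinball) loss instead of squared error. The standard formulation is shown below; the notation (r_{t+1} for the next-period return, x_t for the vector of economic state variables) is assumed here for illustration.

```latex
% Check (pinball) loss for quantile level \tau in (0,1)
\rho_\tau(u) = u \,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr), \qquad
\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{t} \rho_\tau\!\left(r_{t+1} - x_t^{\top}\beta\right)
```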
{
"docid": "7332f08a9447fd321f7e40609cfabfc0",
"text": "Requirements Engineering und Management gewinnen in allen Bereichen der Systementwicklung stetig an Bedeutung. Zusammenhänge zwischen der Qualität der Anforderungserhebung und des Projekterfolges, wie von der Standish Group im jährlich erscheinenden Chaos Report [Standish 2004] untersucht, sind den meisten ein Begriff. Bei der Erhebung von Anforderungen treten immer wieder ähnliche Probleme auf. Dabei spielen unterschiedliche Faktoren und Gegebenheiten eine Rolle, die beachtet werden müssen. Es gibt mehrere Möglichkeiten, die Tücken der Analysephase zu meistern; eine Hilfe bietet der Einsatz der in diesem Artikel vorgestellten Methoden zur Anforderungserhebung. Auch wenn die Anforderungen korrekt und vollständig erhoben sind, ist es eine Kunst, diese zu verwalten. In der heutigen Zeit der verteilten Projekte ist es eine Herausforderung, die Dokumentation für jeden Beteiligten ständig verfügbar, nachvollziehbar und eindeutig zu erhalten. Requirements Management rüstet den Analytiker mit Methoden aus, um sich dieser Herausforderung zu stellen. Änderungen von Stakeholder-Wünschen an bestehenden Anforderungen stellen besondere Ansprüche an das Requirements Management, doch mithilfe eines Change-Management-Prozesses können auch diese bewältigt werden. Metriken und Traceability unterstützen bei der Aufwandsabschätzung für Änderungsanträge.",
"title": ""
},
{
"docid": "fcacf1a443252652dfec05f7061784e1",
"text": "Small point lights (e.g., LEDs) are used as indicators in a wide variety of devices today, from digital watches and toasters, to washing machines and desktop computers. Although exceedingly simple in their output - varying light intensity over time - their design space can be rich. Unfortunately, a survey of contemporary uses revealed that the vocabulary of lighting expression in popular use today is small, fairly unimaginative, and generally ambiguous in meaning. In this paper, we work through a structured design process that points the way towards a much richer set of expressive forms and more effective communication for this very simple medium. In this process, we make use of five different data gathering and evaluation components to leverage the knowledge, opinions and expertise of people outside our team. Our work starts by considering what information is typically conveyed in this medium. We go on to consider potential expressive forms -- how information might be conveyed. We iteratively refine and expand these sets, concluding with ideas gathered from a panel of designers. Our final step was to make use of thousands of human judgments, gathered in a crowd-sourced fashion (265 participants), to measure the suitability of different expressive forms for conveying different information content. This results in a set of recommended light behaviors that mobile devices, such as smartphones, could readily employ.",
"title": ""
},
{
"docid": "6af7d655d12fb276f5db634f4fc7cb74",
"text": "The letter presents a compact 3-bit 90 ° phase shifter for phased-array applications at the 60 GHz ISM band (IEEE 802.11ad standard). The designed phase shifter is based on reflective-type topology using the proposed reflective loads with binary-weighted digitally-controlled varactor arrays and the transformer-type directional coupler. The measured eight output states of the implemented phase shifter in 65 nm CMOS technology, exhibit phase-resolution of 11.25 ° with an RMS phase error of 5.2 °. The insertion loss is 5.69 ± 1.22 dB at 60 GHz and the return loss is better than 12 dB over 54-66 GHz. The chip demonstrates a compact size of only 0.034 mm2.",
"title": ""
},
{
"docid": "a1dba8928f1a3b919b44dbd2ca8c3fb8",
"text": "With the increasing adoption of cloud computing, a growing number of users outsource their datasets to cloud. To preserve privacy, the datasets are usually encrypted before outsourcing. However, the common practice of encryption makes the effective utilization of the data difficult. For example, it is difficult to search the given keywords in encrypted datasets. Many schemes are proposed to make encrypted data searchable based on keywords. However, keyword-based search schemes ignore the semantic representation information of users’ retrieval, and cannot completely meet with users search intention. Therefore, how to design a content-based search scheme and make semantic search more effective and context-aware is a difficult challenge. In this paper, we propose ECSED, a novel semantic search scheme based on the concept hierarchy and the semantic relationship between concepts in the encrypted datasets. ECSED uses two cloud servers. One is used to store the outsourced datasets and return the ranked results to data users. The other one is used to compute the similarity scores between the documents and the query and send the scores to the first server. To further improve the search efficiency, we utilize a tree-based index structure to organize all the document index vectors. We employ the multi-keyword ranked search over encrypted cloud data as our basic frame to propose two secure schemes. The experiment results based on the real world datasets show that the scheme is more efficient than previous schemes. We also prove that our schemes are secure under the known ciphertext model and the known background model.",
"title": ""
},
{
"docid": "92cecd8329343bc3a9b0e46e2185eb1c",
"text": "The spondylo and spondylometaphyseal dysplasias (SMDs) are characterized by vertebral changes and metaphyseal abnormalities of the tubular bones, which produce a phenotypic spectrum of disorders from the mild autosomal-dominant brachyolmia to SMD Kozlowski to autosomal-dominant metatropic dysplasia. Investigations have recently drawn on the similar radiographic features of those conditions to define a new family of skeletal dysplasias caused by mutations in the transient receptor potential cation channel vanilloid 4 (TRPV4). This review demonstrates the significance of radiography in the discovery of a new bone dysplasia family due to mutations in a single gene.",
"title": ""
},
{
"docid": "441f80a25e7a18760425be5af1ab981d",
"text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.",
"title": ""
},
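The mixed `2,1-regularizer discussed in the passage above, and the per-group shrinkage step that makes the ADM subproblems inexpensive for non-overlapping groups, have the standard closed forms below; the paper's exact variable splitting may differ.

```latex
% Mixed l_{2,1} norm: sum of Euclidean norms over groups g
\|x\|_{2,1} = \sum_{g} \|x_g\|_2, \qquad
% Group soft-thresholding: proximal operator of \lambda \|\cdot\|_2 applied per group
\operatorname{prox}_{\lambda \|\cdot\|_2}(v_g) = \max\!\left(0,\; 1 - \frac{\lambda}{\|v_g\|_2}\right) v_g
```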
{
"docid": "5065387618c6eb389ef0efb503172c5a",
"text": "We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes an action in response to the observed context, observing the reward only for that action. Our method assumes access to an oracle for solving cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only Õ( √ T ) oracle calls across all T rounds. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.",
"title": ""
},
{
"docid": "5e9a0d990a3b4fb075552346a11986c4",
"text": "The TinyTeRP is a centimeter-scale, modular wheeled robotic platform developed for the study of swarming or collective behavior. This paper presents the use of TinyTeRPs to implement collective recruitment and rendezvous to a fixed location using several RSSI-based gradient ascent algorithms. We also present a redesign of the wheelbased module with tank treads and a wider base, improving the robot’s mobility over uneven terrain and overall robustness. Lastly, we present improvements to the open source C libraries that allow users to easily implement high-level functions and closed-loop control on the TinyTeRP.",
"title": ""
}
] |
scidocsrr
|
f8de969b1dbdf28a99429403dda7302b
|
Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap
|
[
{
"docid": "786f1bbc10cfb952c7709b635ec01fcf",
"text": "Artificial neural networks (NN) have shown a significant promise in difficult tasks like image classification or speech recognition. Even well-optimized hardware implementations of digital NNs show significant power consumption. It is mainly due to non-uniform pipeline structures and inherent redundancy of numerous arithmetic operations that have to be performed to produce each single output vector. This paper provides a methodology for the design of well-optimized power-efficient NNs with a uniform structure suitable for hardware implementation. An error resilience analysis was performed in order to determine key constraints for the design of approximate multipliers that are employed in the resulting structure of NN. By means of a search based approximation method, approximate multipliers showing desired tradeoffs between the accuracy and implementation cost were created. Resulting approximate NNs, containing the approximate multipliers, were evaluated using standard benchmarks (MNIST dataset) and a real-world classification problem of Street-View House Numbers. Significant improvement in power efficiency was obtained in both cases with respect to regular NNs. In some cases, 91% power reduction of multiplication led to classification accuracy degradation of less than 2.80%. Moreover, the paper showed the capability of the back propagation learning algorithm to adapt with NNs containing the approximate multipliers.",
"title": ""
},
{
"docid": "f19f4d2c9e05f30e21d09ab41da9ec47",
"text": "Multilayered artificial neural networks have found widespread utility in classification and recognition applications. The scale and complexity of such networks together with the inadequacies of general purpose computing platforms have led to a significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights, motivated primarily by the observation that the number of synapses is orders of magnitude larger than the number of neurons. Typical digital CMOS implementations of such large-scale networks are power hungry. In order to minimize the power consumption, the digital neurons could be operated reliably at scaled voltages by reducing the clock frequency. On the contrary, the on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic error resiliency of neural networks to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200 mV from the nominal operating voltage (950 mV) for practically no loss (less than 0.5%) in accuracy (22 nm predictive technology). Scaling beyond that causes substantial performance degradation owing to increased probability of failures in the MSBs of the synaptic weights. We, therefore propose a significance driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. In an effort to further minimize the area penalty, we present a synaptic-sensitivity driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit to system-level simulation framework shows that the proposed synaptic-sensitivity driven architecture provides a 30.91% reduction in the memory access power with a 10.41% area overhead, for less than 1% loss in the classification accuracy.",
"title": ""
}
] |
[
{
"docid": "d0c5d24a5f68eb5448b45feeca098b87",
"text": "Age estimation has wide applications in video surveillance, social networking, and human-computer interaction. Many of the published approaches simply treat age estimation as an exact age regression problem, and thus do not leverage a distribution's robustness in representing labels with ambiguity such as ages. In this paper, we propose a new loss function, called mean-variance loss, for robust age estimation via distribution learning. Specifically, the mean-variance loss consists of a mean loss, which penalizes difference between the mean of the estimated age distribution and the ground-truth age, and a variance loss, which penalizes the variance of the estimated age distribution to ensure a concentrated distribution. The proposed mean-variance loss and softmax loss are jointly embedded into Convolutional Neural Networks (CNNs) for age estimation. Experimental results on the FG-NET, MORPH Album II, CLAP2016, and AADB databases show that the proposed approach outperforms the state-of-the-art age estimation methods by a large margin, and generalizes well to image aesthetics assessment.",
"title": ""
},
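The two penalty terms described in the passage above can be written down directly from a predicted age distribution. The sketch below is a straightforward reconstruction of a mean-variance style loss; the 1/2 scaling and the weighting between terms are assumptions, not the paper's exact settings.

```python
import torch

def mean_variance_loss(probs, true_age, ages, lam_mean=1.0, lam_var=1.0):
    """probs: (N, K) softmax over K age bins; ages: (K,) bin values; true_age: (N,)."""
    mean = (probs * ages).sum(dim=1)                        # expected age per sample
    var = (probs * (ages - mean.unsqueeze(1)) ** 2).sum(dim=1)
    mean_loss = 0.5 * (mean - true_age) ** 2                # penalize mean far from ground truth
    var_loss = var                                          # penalize spread-out distributions
    return (lam_mean * mean_loss + lam_var * var_loss).mean()

logits = torch.randn(4, 101, requires_grad=True)
probs = torch.softmax(logits, dim=1)
ages = torch.arange(101, dtype=torch.float32)               # age bins 0..100
loss = mean_variance_loss(probs, torch.tensor([23., 45., 31., 60.]), ages)
loss.backward()
```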
{
"docid": "04d9bc52997688b48e70e91a43a145ef",
"text": "Post-weaning social isolation (PSI) has been shown to increase aggressive behavior and alter medial prefrontal cortex (mPFC) function in social species such as rats. Here we developed a novel escapable social interaction test (ESIT) allowing for the quantification of escape and social behaviors in addition to mPFC activation in response to an aggressive or nonaggressive stimulus rat. Male rats were exposed to 3 weeks of PSI (ISO) or group (GRP) housing, and exposed to 3 trials, with either no trial, all trials, or the last trial only with a stimulus rat. Analysis of social behaviors indicated that ISO rats spent less time in the escape chamber and more time engaged in social interaction, aggressive grooming, and boxing than did GRP rats. Interestingly, during the third trial all rats engaged in more of the quantified social behaviors and spent less time escaping in response to aggressive but not nonaggressive stimulus rats. Rats exposed to nonaggressive stimulus rats on the third trial had greater c-fos and ARC immunoreactivity in the mPFC than those exposed to an aggressive stimulus rat. Conversely, a social encounter produced an increase in large PSD-95 punctae in the mPFC independently of trial number, but only in ISO rats exposed to an aggressive stimulus rat. The results presented here demonstrate that PSI increases interaction time and aggressive behaviors during escapable social interaction, and that the aggressiveness of the stimulus rat in a social encounter is an important component of behavioral and neural outcomes for both isolation and group-reared rats.",
"title": ""
},
{
"docid": "b267f474e8ff3ec6bcdadd9bfc58d771",
"text": "This paper elaborates the hypothesis that the unique demography and sociology of Ashkenazim in medieval Europe selected for intelligence. Ashkenazi literacy, economic specialization, and closure to inward gene flow led to a social environment in which there was high fitness payoff to intelligence, specifically verbal and mathematical intelligence but not spatial ability. As with any regime of strong directional selection on a quantitative trait, genetic variants that were otherwise fitness reducing rose in frequency. In particular we propose that the well-known clusters of Ashkenazi genetic diseases, the sphingolipid cluster and the DNA repair cluster in particular, increase intelligence in heterozygotes. Other Ashkenazi disorders are known to increase intelligence. Although these disorders have been attributed to a bottleneck in Ashkenazi history and consequent genetic drift, there is no evidence of any bottleneck. Gene frequencies at a large number of autosomal loci show that if there was a bottleneck then subsequent gene flow from Europeans must have been very large, obliterating the effects of any bottleneck. The clustering of the disorders in only a few pathways and the presence at elevated frequency of more than one deleterious allele at many of them could not have been produced by drift. Instead these are signatures of strong and recent natural selection.",
"title": ""
},
{
"docid": "522363d36c93b692265c42f9f3976461",
"text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.",
"title": ""
},
{
"docid": "e818b0a38d17a77cc6cfdee2761f12c4",
"text": "In this paper, we present improved lane tracking using vehicle localization. Lane markers are detected using a bank of steerable filters, and lanes are tracked using Kalman filtering. On-road vehicle detection has been achieved using an active learning approach, and vehicles are tracked using a Condensation particle filter. While most state-of-the art lane tracking systems are not capable of performing in high-density traffic scenes, the proposed framework exploits robust vehicle tracking to allow for improved lane tracking in high density traffic. Experimental results demonstrate that lane tracking performance, robustness, and temporal response are significantly improved in the proposed framework, while also tracking vehicles, with minimal additional hardware requirements.",
"title": ""
},
{
"docid": "1e7c1dfe168aec2353b31613811112ae",
"text": "A great video title describes the most salient event compactly and captures the viewer’s attention. In contrast, video captioning tends to generate sentences that describe the video as a whole. Although generating a video title automatically is a very useful task, it is much less addressed than video captioning. We address video title generation for the first time by proposing two methods that extend state-of-the-art video captioners to this new task. First, we make video captioners highlight sensitive by priming them with a highlight detector. Our framework allows for jointly training a model for title generation and video highlight localization. Second, we induce high sentence diversity in video captioners, so that the generated titles are also diverse and catchy. This means that a large number of sentences might be required to learn the sentence structure of titles. Hence, we propose a novel sentence augmentation method to train a captioner with additional sentence-only examples that come without corresponding videos. We collected a large-scale Video Titles in the Wild (VTW) dataset of 18100 automatically crawled user-generated videos and titles. On VTW, our methods consistently improve title prediction accuracy, and achieve the best performance in both automatic and human evaluation. Finally, our sentence augmentation method also outperforms the baselines on the M-VAD dataset.",
"title": ""
},
{
"docid": "94c48d169fffc2be925e06c44fe26797",
"text": "The classical approach to using utility functions suffers from the drawback of having to design and tweak the functions on a case by case basis. Inspired by examples from the animal kingdom, social sciences and games we propose empowerment, a rather universal function, defined as the information-theoretic capacity of an agent’s actuation channel. The concept applies to any sensorimotoric apparatus. Empowerment as a measure reflects the properties of the apparatus as long as they are observable due to the coupling of sensors and actuators via the environment.",
"title": ""
},
{
"docid": "38a57287afb6f9b619101fb5da630cab",
"text": "The purpose of this study was to calculate the seroprevalence of Trypanosoma cruzi infection in a sample of inhabitants from a region considered to be at high risk of natural transmission of Chagas disease in Colombia. A cross-sectional study was conducted in subjects from 5 municipalities, recruited in urban and rural locations, distributed by gender according to the demographic information available. Socio-demographic information, history of potential exposure to insect vectors, blood donating, as well as symptoms suggesting cardiac disease were collected using a questionnaire. After giving written informed consent, blood specimens were obtained from 486 people to determine the serologic evidence of past exposure to T. cruzi. Infection was diagnosed when two different tests (ELISA and IHA) were positive. The seroprevalence of antibodies against T. cruzi was 16.91% considering an estimated population of 44,355 aged between 15 and 89 years (95%IC: 13.72 to 20.01). The factors significantly associated with the infection were: 1- Housing materials like vegetable material, adobe or unfinished brick walls; 2- The fact of having previous tests for Chagas disease (regardless of the result). Of note, the mean ages among infected and not infected participants were significantly different (49.19 vs. 41.66, p ≤ 0.0001). Among the studied municipalities, the one with the highest frequency of T. cruzi infection was Nunchia, with 31.15% of the surveyed subjects. Therefore it may be concluded that T. cruzi infection is highly prevalent in the north region of Casanare, in Colombia.",
"title": ""
},
{
"docid": "0452cba63dfe7a89cc3cb5802fcfdd3e",
"text": "We show efficient algorithms for edge-coloring planar graphs. Our main result is a linear-time algorithm for coloring planar graphs with maximum degree Δ with max {Δ,9} colors. Thus the coloring is optimal for graphs with maximum degree Δ≥9. Moreover for Δ=4,5,6 we give linear-time algorithms that use Δ+2 colors. These results improve over the algorithms of Chrobak and Yung (J. Algorithms 10:35–51, 1989) and of Chrobak and Nishizeki (J. Algorithms 11:102–116, 1990) which color planar graphs using max {Δ,19} colors in linear time or using max {Δ,9} colors in $\\mathcal{O}(n\\log n)$ time.",
"title": ""
},
{
"docid": "38156f5376f9d4643ce451bddce78408",
"text": "Association rule mining is one of the most popular data mining methods. However, mining association rules often results in a very large number of found rules, leaving the analyst with the task to go through all the rules and discover interesting ones. Sifting manually through large sets of rules is time consuming and strenuous. Visualization has a long history of making large amounts of data better accessible using techniques like selecting and zooming. However, most association rule visualization techniques are still falling short when it comes to a large number of rules. In this paper we present a new interactive visualization technique which lets the user navigate through a hierarchy of groups of association rules. We demonstrate how this new visualization techniques can be used to analyze a large sets of association rules with examples from our implementation in the R-package arulesViz.",
"title": ""
},
{
"docid": "21321c82a296da3c8c1f0637e3bfc3e7",
"text": "We present a discrete distance transform in style of the vector propagation algorithm by Danielsson. Like other vector propagation algorithms, the proposed method is close to exact, i.e., the error can be strictly bounded from above and is significantly smaller than one pixel. Our contribution is that the algorithm runs entirely on consumer class graphics hardware, thereby achieving a throughput of up to 96 Mpixels/s. This allows the proposed method to be used in a wide range of applications that rely both on high speed and high quality.",
"title": ""
},
{
"docid": "1b7048c328414573f55cc4aed2744496",
"text": "Structural Health Monitoring (SHM) can be understood as the integration of sensing and intelligence to enable the structure loading and damage-provoking conditions to be recorded, analyzed, localized, and predicted in such a way that nondestructive testing becomes an integral part of them. In addition, SHM systems can include actuation devices to take proper reaction or correction actions. SHM sensing requirements are very well suited for the application of optical fiber sensors (OFS), in particular, to provide integrated, quasi-distributed or fully distributed technologies. In this tutorial, after a brief introduction of the basic SHM concepts, the main fiber optic techniques available for this application are reviewed, emphasizing the four most successful ones. Then, several examples of the use of OFS in real structures are also addressed, including those from the renewable energy, transportation, civil engineering and the oil and gas industry sectors. Finally, the most relevant current technical challenges and the key sector markets are identified. This paper provides a tutorial introduction, a comprehensive background on this subject and also a forecast of the future of OFS for SHM. In addition, some of the challenges to be faced in the near future are addressed.",
"title": ""
},
{
"docid": "9e70220bad6316cbfff90db8d5f80826",
"text": "Limits on the storage capacity of working memory significantly affect cognitive abilities in a wide range of domains, but the nature of these capacity limits has been elusive. Some researchers have proposed that working memory stores a limited set of discrete, fixed-resolution representations, whereas others have proposed that working memory consists of a pool of resources that can be allocated flexibly to provide either a small number of high-resolution representations or a large number of low-resolution representations. Here we resolve this controversy by providing independent measures of capacity and resolution. We show that, when presented with more than a few simple objects, human observers store a high-resolution representation of a subset of the objects and retain no information about the others. Memory resolution varied over a narrow range that cannot be explained in terms of a general resource pool but can be well explained by a small set of discrete, fixed-resolution representations.",
"title": ""
},
{
"docid": "57d3505a655e9c0efdc32101fd09b192",
"text": "POX is a Python based open source OpenFlow/Software Defined Networking (SDN) Controller. POX is used for faster development and prototyping of new network applications. POX controller comes pre installed with the mininet virtual machine. Using POX controller you can turn dumb openflow devices into hub, switch, load balancer, firewall devices. The POX controller allows easy way to run OpenFlow/SDN experiments. POX can be passed different parameters according to real or experimental topologies, thus allowing you to run experiments on real hardware, testbeds or in mininet emulator. In this paper, first section will contain introduction about POX, OpenFlow and SDN, then discussion about relationship between POX and Mininet. Final Sections will be regarding creating and verifying behavior of network applications in POX.",
"title": ""
},
{
"docid": "98c286ed333b19a8aa5c811ca4e03505",
"text": "Empirical evidence suggests that neural networks with ReLU activations generalize better with over-parameterization. However, there is currently no theoretical analysis that explains this observation. In this work, we study a simplified learning task with over-parameterized convolutional networks that empirically exhibits the same qualitative phenomenon. For this setting, we provide a theoretical analysis of the optimization and generalization performance of gradient descent. Specifically, we prove data-dependent sample complexity bounds which show that overparameterization improves the generalization performance of gradient descent.",
"title": ""
},
{
"docid": "589396a7c9dae0567f0bcd4d83461a6f",
"text": "The risk of inadequate hand hygiene in food handling settings is exacerbated when water is limited or unavailable, thereby making washing with soap and water difficult. The SaniTwice method involves application of excess alcohol-based hand sanitizer (ABHS), hand \"washing\" for 15 s, and thorough cleaning with paper towels while hands are still wet, followed by a standard application of ABHS. This study investigated the effectiveness of the SaniTwice methodology as an alternative to hand washing for cleaning and removal of microorganisms. On hands moderately soiled with beef broth containing Escherichia coli (ATCC 11229), washing with a nonantimicrobial hand washing product achieved a 2.86 (±0.64)-log reduction in microbial contamination compared with the baseline, whereas the SaniTwice method with 62 % ethanol (EtOH) gel, 62 % EtOH foam, and 70 % EtOH advanced formula gel achieved reductions of 2.64 ± 0.89, 3.64 ± 0.57, and 4.61 ± 0.33 log units, respectively. When hands were heavily soiled from handling raw hamburger containing E. coli, washing with nonantimicrobial hand washing product and antimicrobial hand washing product achieved reductions of 2.65 ± 0.33 and 2.69 ± 0.32 log units, respectively, whereas SaniTwice with 62 % EtOH foam, 70 % EtOH gel, and 70 % EtOH advanced formula gel achieved reductions of 2.87 ± 0.42, 2.99 ± 0.51, and 3.92 ± 0.65 log units, respectively. These results clearly demonstrate that the in vivo antibacterial efficacy of the SaniTwice regimen with various ABHS is equivalent to or exceeds that of the standard hand washing approach as specified in the U.S. Food and Drug Administration Food Code. Implementation of the SaniTwice regimen in food handling settings with limited water availability should significantly reduce the risk of foodborne infections resulting from inadequate hand hygiene.",
"title": ""
},
{
"docid": "ddaf60e511051f3b7e521c4a90f3f9cf",
"text": "The objective of this study was to determine the effects of formulation excipients and physical characteristics of inhalation particles on their in vitro aerosolization performance, and thereby to maximize their respirable fraction. Dry powders were produced by spray-drying using excipients that are FDA-approved for inhalation as lactose, materials that are endogenous to the lungs as albumin and dipalmitoylphosphatidylcholine (DPPC); and/or protein stabilizers as trehalose or mannitol. Dry powders suitable for deep lung deposition, i.e. with an aerodynamic diameter of individual particles <3 microm, were prepared. They presented 0.04--0.25 g/cm(3) bulk tap densities, 3--5 microm geometric particle sizes, up to 90% emitted doses and 50% respirable fractions in the Andersen cascade impactor using a Spinhaler inhaler device. The incorporation of lactose, albumin and DPPC in the formulation all improved the aerosolization properties, in contrast to trehalose and the mannitol which decreased powder flowability. The relative proportion of the excipients affected aerosol performance as well. The lower the bulk powder tap density, the higher the respirable fraction. Optimization of in vitro aerosolization properties of inhalation dry powders can be achieved by appropriately selecting composition and physical characteristics of the particles.",
"title": ""
},
{
"docid": "9415adaa3ec2f7873a23cc2017a2f1ee",
"text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.",
"title": ""
},
{
"docid": "a425d9e6cc296f136d6b9ba77320c4e6",
"text": "BACKGROUND\nComputer game addiction is excessive or compulsive use of computer and video games that may interfere with daily life. It is not clear whether video game playing meets diagnostic criteria for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV).\n\n\nOBJECTIVES\nFirst objective is to review the literature on computer and video game addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment. Second objective is to describe a brain imaging study measuring dopamine release during computer game playing.\n\n\nMETHODS\nArticle search of 15 published articles between 2000 and 2009 in Medline and PubMed on computer and video game addiction. Nine abstinent \"ecstasy\" users and 8 control subjects were scanned at baseline and after performing on a motorbike riding computer game while imaging dopamine release in vivo with [123I] IBZM and single photon emission computed tomography (SPECT).\n\n\nRESULTS\nPsycho-physiological mechanisms underlying computer game addiction are mainly stress coping mechanisms, emotional reactions, sensitization, and reward. Computer game playing may lead to long-term changes in the reward circuitry that resemble the effects of substance dependence. The brain imaging study showed that healthy control subjects had reduced dopamine D2 receptor occupancy of 10.5% in the caudate after playing a motorbike riding computer game compared with baseline levels of binding consistent with increased release and binding to its receptors. Ex-chronic \"ecstasy\" users showed no change in levels of dopamine D2 receptor occupancy after playing this game.\n\n\nCONCLUSION\nThis evidence supports the notion that psycho-stimulant users have decreased sensitivity to natural reward.\n\n\nSIGNIFICANCE\nComputer game addicts or gamblers may show reduced dopamine response to stimuli associated with their addiction presumably due to sensitization.",
"title": ""
},
{
"docid": "f5b02bdd74772ff2454a475e44077c8e",
"text": "This paper presents a new method - adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the dialogue agent to explore state-action within the regions where the agent takes actions similar to those of the experts. Experimental results in a movie-ticket booking domain show that the proposed Adversarial A2C can accelerate policy exploration efficiently.",
"title": ""
}
] |
scidocsrr
|
3737e986774433d1b0de441a39607086
|
Money Laundering Detection using Synthetic Data
|
[
{
"docid": "e67dc912381ebbae34d16aad0d3e7d92",
"text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.",
"title": ""
},
{
"docid": "51eb8e36ffbf5854b12859602f7554ef",
"text": "Fraud is increasing dramatically with the expansion of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. Although prevention technologies are the best way to reduce fraud, fraudsters are adaptive and, given time, will usually find ways to circumvent such measures. Methodologies for the detection of fraud are essential if we are to catch fraudsters once fraud prevention has failed. Statistics and machine learning provide effective technologies for fraud detection and have been applied successfully to detect activities such as money laundering, e-commerce credit card fraud, telecommunications fraud and computer intrusion, to name but a few. We describe the tools available for statistical fraud detection and the areas in which fraud detection technologies are most used.",
"title": ""
}
] |
[
{
"docid": "f069501007d4c9d1ada190353d01c7e9",
"text": "A discrimination theory of selective perception was used to predict that a given trait would be spontaneously salient in a person's self-concept to the exten that this trait was distinctive for the person within her or his social groups. Sixth-grade students' general and physical spontaneous self-concepts were elicited in their classroom settings. The distinctiveness within the classroom of each student's characteristics on each of a variety of dimensions was determined, and it was found that in a majority of cases the dimension was significantly more salient in the spontaneous self-concepts of those students whose characteristic on thedimension was more distinctive. Also reported are incidental findings which include a description of the contents of spontaneous self-comcepts as well as determinants of their length and of the spontaneous mention of one's sex as part of one's self-concept.",
"title": ""
},
{
"docid": "87eab42827061426dfc9b335530e7037",
"text": "OBJECTIVES\nHealth behavior theories focus on the role of conscious, reflective factors (e.g., behavioral intentions, risk perceptions) in predicting and changing behavior. Dual-process models, on the other hand, propose that health actions are guided not only by a conscious, reflective, rule-based system but also by a nonconscious, impulsive, associative system. This article argues that research on health decisions, actions, and outcomes will be enriched by greater consideration of nonconscious processes.\n\n\nMETHODS\nA narrative review is presented that delineates research on implicit cognition, implicit affect, and implicit motivation. In each case, we describe the key ideas, how they have been taken up in health psychology, and the possibilities for behavior change interventions, before outlining directions that might profitably be taken in future research.\n\n\nRESULTS\nCorrelational research on implicit cognitive and affective processes (attentional bias and implicit attitudes) has recently been supplemented by intervention studies using implementation intentions and practice-based training that show promising effects. Studies of implicit motivation (health goal priming) have also observed encouraging findings. There is considerable scope for further investigations of implicit affect control, unconscious thought, and the automatization of striving for health goals.\n\n\nCONCLUSION\nResearch on nonconscious processes holds significant potential that can and should be developed by health psychologists. Consideration of impulsive as well as reflective processes will engender new targets for intervention and should ultimately enhance the effectiveness of behavior change efforts.",
"title": ""
},
{
"docid": "ea49e4a74c165f3819e24d48df4777f2",
"text": "BACKGROUND\nThe fatty tissue of the face is divided into compartments. The structures delimiting these compartments help shape the face, are involved in aging, and are encountered during surgical procedures.\n\n\nOBJECTIVE\nTo study the border between the lateral-temporal and the middle cheek fat compartments of the face.\n\n\nMETHODS & MATERIALS\nWe studied 40 human cadaver heads with gross dissections and macroscopic and histological sections. Gelatin was injected into the subcutaneous tissues of 35 heads.\n\n\nRESULTS\nA sheet of connective tissue, comparable to a septum, was consistently found between the lateral-temporal and the middle compartments. We call this structure the septum subcutaneum parotideomassetericum.\n\n\nCONCLUSION\nThere is a distinct septum between the lateral-temporal and the middle fat compartments of the face.",
"title": ""
},
{
"docid": "3f9bcd99eac46264ee0920ddcc866d33",
"text": "The advent of easy to use blogging tools is increasing the number of bloggers leading to more diversity in the quality blogspace. The blog search technologies that help users to find “good” blogs are thus more and more important. This paper proposes a new algorithm called “EigenRumor” that scores each blog entry by weighting the hub and authority scores of the bloggers based on eigenvector calculations. This algorithm enables a higher score to be assigned to the blog entries submitted by a good blogger but not yet linked to by any other blogs based on acceptance of the blogger's prior work. General Terms Algorithms, Management, Experimentation",
"title": ""
},
{
"docid": "b527ade4819e314a723789de58280724",
"text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.",
"title": ""
},
{
"docid": "86177ff4fbc089fde87d1acd8452d322",
"text": "Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.",
"title": ""
},
{
"docid": "39fc05dfc0faeb47728b31b6053c040a",
"text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.",
"title": ""
},
{
"docid": "6f84dbe3cf41906b66a7b1d9fe8b0ff1",
"text": "We show that the credit quality of corporate debt issuers deteriorates during credit booms, and that this deterioration forecasts low excess returns to corporate bondholders. The key insight is that changes in the pricing of credit risk disproportionately affect the financing costs faced by low quality firms, so the debt issuance of low quality firms is particularly useful for forecasting bond returns. We show that a significant decline in issuer quality is a more reliable signal of credit market overheating than rapid aggregate credit growth. We use these findings to investigate the forces driving time-variation in expected corporate bond returns. For helpful suggestions, we are grateful to Malcolm Baker, Effi Benmelech, Dan Bergstresser, John Campbell, Sergey Chernenko, Lauren Cohen, Ian Dew-Becker, Martin Fridson, Victoria Ivashina, Chris Malloy, Andrew Metrick, Jun Pan, Erik Stafford, Luis Viceira, Jeff Wurgler, seminar participants at the 2012 AEA Annual Meetings, Columbia GSB, Dartmouth Tuck, Federal Reserve Bank of New York, Federal Reserve Board of Governors, Harvard Business School, MIT Sloan, NYU Stern, Ohio State Fisher, University of Chicago Booth, University of Pennsylvania Wharton, Washington University Olin, Yale SOM, and especially David Scharfstein, Andrei Shleifer, Jeremy Stein, and Adi Sunderam. We thank Annette Larson and Morningstar for data on bond returns and Mara Eyllon and William Lacy for research assistance. The Division of Research at the Harvard Business School provided funding.",
"title": ""
},
{
"docid": "c2d0e11e37c8f0252ce77445bf583173",
"text": "This paper describes a method to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline to infer 3D model shapes including clothed people with 4.5mm reconstruction accuracy. At the core of our approach is the transformation of dynamic body pose into a canonical frame of reference. Our main contribution is a method to transform the silhouette cones corresponding to dynamic human silhouettes to obtain a visual hull in a common reference frame. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. Results on 4 different datasets demonstrate the effectiveness of our approach to produce accurate 3D models. Requiring only an RGB camera, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.",
"title": ""
},
{
"docid": "0c4de7ce6574bb22d3cb0b9a7f3d5498",
"text": "Purpose – The purpose of this paper is to attempts to provide further insight into IS adoption by investigating how 12 factors within the technology-organization-environment framework explain smalland medium-sized enterprises’ (SMEs) adoption of enterprise resource planning (ERP) software. Design/methodology/approach – The approach for data collection was questionnaire survey involving executives of SMEs drawn from six fast service enterprises with strong operations in Port Harcourt. The mode of sampling was purposive and snow ball and analysis involves logistic regression test; the likelihood ratios, Hosmer and Lemeshow’s goodness of fit, and Nagelkerke’s R provided the necessary lenses. Findings – The 12 hypothesized relationships were supported with each factor differing in its statistical coefficient and some bearing negative values. ICT infrastructures, technical know-how, perceived compatibility, perceived values, security, and firm’s size were found statistically significant adoption determinants. Although, scope of business operations, trading partners’ readiness, demographic composition, subjective norms, external supports, and competitive pressures were equally critical but their negative coefficients suggest they pose less of an obstacle to adopters than to non-adopters. Thus, adoption of ERP by SMEs is more driven by technological factors than by organizational and environmental factors. Research limitations/implications – The study is limited by its scope of data collection and phases, therefore extended data are needed to apply the findings to other sectors/industries and to factor in the implementation and post-adoption phases in order to forge a more integrated and holistic adoption framework. Practical implications – The model may be used by IS vendors to make investment decisions, to meet customers’ needs, and to craft informed marketing programs that would appeal to actual and potential adopters and cause them to progress in the customer loyalty ladder. Originality/value – The paper contributes to the growing research on IS innovations’ adoption by using factors within the T-O-E framework to explains SMEs’ adoption of ERP.",
"title": ""
},
{
"docid": "060101cf53a576336e27512431c4c4fc",
"text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
},
{
"docid": "343c1607a4f8df8a8202adb26f9959ed",
"text": "This investigation examined the measurement properties of the Three Domains of Disgust Scale (TDDS). Principal components analysis in Study 1 (n = 206) revealed three factors of Pathogen, Sexual, and Moral Disgust that demonstrated excellent reliability, including test-retest over 12 weeks. Confirmatory factor analyses in Study 2 (n = 406) supported the three factors. Supportive evidence for the validity of the Pathogen and Sexual Disgust subscales was found in Study 1 and Study 2 with strong associations with disgust/contamination and weak associations with negative affect. However, the validity of the Moral Disgust subscale was limited. Study 3 (n = 200) showed that the TDDS subscales differentially related to personality traits. Study 4 (n = 47) provided evidence for the validity of the TDDS subscales in relation to multiple indices of disgust/contamination aversion in a select sample. Study 5 (n = 70) further highlighted limitations of the Moral Disgust subscale given the lack of a theoretically consistent association with moral attitudes. Lastly, Study 6 (n = 178) showed that responses on the Moral Disgust scale were more intense when anger was the response option compared with when disgust was the response option. The implications of these findings for the assessment of disgust are discussed.",
"title": ""
},
{
"docid": "20b02c2afa20a2c2d3a5e9fb4ec5be85",
"text": "Cloning and characterization of the orphan nuclear receptors constitutive androstane receptor (CAR, NR1I3) and pregnane X receptor (PXR, NR1I2) led to major breakthroughs in studying drug-mediated transcriptional induction of drug-metabolizing cytochromes P450 (CYPs). More recently, additional roles for CAR and PXR have been discovered. As examples, these xenosensors are involved in the homeostasis of cholesterol, bile acids, bilirubin, and other endogenous hydrophobic molecules in the liver: CAR and PXR thus form an intricate regulatory network with other members of the nuclear receptor superfamily, foremost the cholesterol-sensing liver X receptor (LXR, NR1H2/3) and the bile-acid-activated farnesoid X receptor (FXR, NR1H4). In this review, functional interactions between these nuclear receptors as well as the consequences on physiology and pathophysiology of the liver are discussed.",
"title": ""
},
{
"docid": "a0850b5f8b2d994b50bb912d6fca3dfb",
"text": "In this paper we describe the development of an accurate, smallfootprint, large vocabulary speech recognizer for mobile devices. To achieve the best recognition accuracy, state-of-the-art deep neural networks (DNNs) are adopted as acoustic models. A variety of speedup techniques for DNN score computation are used to enable real-time operation on mobile devices. To reduce the memory and disk usage, on-the-fly language model (LM) rescoring is performed with a compressed n-gram LM. We were able to build an accurate and compact system that runs well below real-time on a Nexus 4 Android phone.",
"title": ""
},
{
"docid": "bf14f996f9013351aca1e9935157c0e3",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
},
{
"docid": "fb87648c3bb77b1d9b162a8e9dbc5e86",
"text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"title": ""
},
{
"docid": "3d6744ae85a9aa07d8c4cb68c79290c7",
"text": "Control over the motional degrees of freedom of atoms, ions, and molecules in a field-free environment enables unrivalled measurement accuracies but has yet to be applied to highly charged ions (HCIs), which are of particular interest to future atomic clock designs and searches for physics beyond the Standard Model. Here, we report on the Coulomb crystallization of HCIs (specifically 40Ar13+) produced in an electron beam ion trap and retrapped in a cryogenic linear radiofrequency trap by means of sympathetic motional cooling through Coulomb interaction with a directly laser-cooled ensemble of Be+ ions. We also demonstrate cooling of a single Ar13+ ion by a single Be+ ion—the prerequisite for quantum logic spectroscopy with a potential 10−19 accuracy level. Achieving a seven-orders-of-magnitude decrease in HCI temperature starting at megakelvin down to the millikelvin range removes the major obstacle for HCI investigation with high-precision laser spectroscopy.",
"title": ""
},
{
"docid": "a33aa33a2ae6efe5ca43948e8ef3043e",
"text": "In this paper, we describe COCA -- Computation Offload to Clouds using AOP (aspect-oriented programming). COCA is a programming framework that allows smart phones application developers to offload part of the computation to servers in the cloud easily. COCA works at the source level. By harnessing the power of AOP, \\name inserts appropriate offloading code into the source code of the target application based on the result of static and dynamic profiling. As a proof of concept, we integrate \\name into the Android development environment and fully automate the new build process, making application programming and software maintenance easier. With COCA, mobile applications can now automatically offload part of the computation to the cloud, achieving better performance and longer battery life. Smart phones such as iPhone and Android phones can now easily leverage the immense computing power of the cloud to achieve tasks that were considered difficult before, such as having a more complicated artificial-intelligence engine.",
"title": ""
}
] |
scidocsrr
|
f6f69455167c9a7c1df696807904885f
|
AN IMAGE PROCESSING AND NEURAL NETWORK BASED APPROACH FOR DETECTION AND CLASSIFICATION OF PLANT LEAF DISEASES
|
[
{
"docid": "9aa3a9b8fb22ba929146298386ca9e57",
"text": "Since current grading of plant diseases is mainly based on eyeballing, a new method is developed based on computer image processing. All influencing factors existed in the process of image segmentation was analyzed and leaf region was segmented by using Otsu method. In the HSI color system, H component was chosen to segment disease spot to reduce the disturbance of illumination changes and the vein. Then, disease spot regions were segmented by using Sobel operator to examine disease spot edges. Finally, plant diseases are graded by calculating the quotient of disease spot and leaf areas. Researches indicate that this method to grade plant leaf spot diseases is fast and accurate.",
"title": ""
}
] |
[
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "4dbd25b0c93b702d93513601b41553b0",
"text": "The last decade has seen a growing interest in air quality monitoring using networks of wireless low-cost sensor platforms. One of the unifying characteristics of chemical sensors typically used in real-world deployments is their slow response time. While the impact of sensor dynamics can largely be neglected when considering static scenarios, in mobile applications chemical sensor measurements should not be considered as point measurements (i.e. instantaneous in space and time). In this paper, we study the impact of sensor dynamics on measurement accuracy and locality through systematic experiments in the controlled environment of a wind tunnel. We then propose two methods for dealing with this problem: (i) reducing the effect of the sensor’s slow dynamics by using an open active sampler, and (ii) estimating the underlying true signal using a sensor model and a deconvolution technique. We consider two performance metrics for evaluation: localization accuracy of specific field features and root mean squared error in field estimation. Finally, we show that the deconvolution technique results in consistent performance improvement for all the considered scenarios, and for both metrics, while the active sniffer design considered provides an advantage only for feature localization, particularly for the highest sensor movement speed.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "a7656eb3b0443ef88ef4bb134a4f3a55",
"text": "A simple methodology is described – the multi-turbine power curve approach – a methodology to generate a qualified estimate of the time series of the aggregated power generation from planned wind turbine units distributed in an area where limited wind time series are available. This is often the situation in a planning phase where you want to simulate planned expansions in a power system with wind power. The methodology is described in a stepby-step guideline.",
"title": ""
},
{
"docid": "4af7fe3bbfcd5874f1e0607ceeda97ab",
"text": "Personality psychology addresses views of human nature and individual differences. Biological and goal-based views of human nature provide an especially useful basis for construing coping; the five-factor model of traits adds a useful set of individual differences. Coping-responses to adversity and to the distress that results-is categorized in many ways. Meta-analyses link optimism, extraversion, conscientiousness, and openness to more engagement coping; neuroticism to more disengagement coping; and optimism, conscientiousness, and agreeableness to less disengagement coping. Relations of traits to specific coping responses reveal a more nuanced picture. Several moderators of these associations also emerge: age, stressor severity, and temporal proximity between the coping activity and the coping report. Personality and coping play both independent and interactive roles in influencing physical and mental health. Recommendations are presented for ways future research can expand on the growing understanding of how personality and coping shape adjustment to stress.",
"title": ""
},
{
"docid": "f6c874435978db83361f62bfe70a6681",
"text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. [email protected] or journal managing editor Susan Haigney at [email protected].",
"title": ""
},
{
"docid": "adae03c768e3bc72f325075cf22ef7b1",
"text": "The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR). In this review, I discuss why this problem is pivotal for nearby tasks in VR and AR, present a comprehensive taxonomy of potential solutions, address advantages and shortfalls of each design, and cover various ways to better evaluate the solutions. The review describes how VAC is addressed in monocular, stereoscopic, and multiscopic HMDs, including retinal scanning and accommodation-free displays. Eye-tracking-based approaches that do not provide natural focal cues-gaze-guided blur and dynamic stereoscopy-are also covered. Promising future research directions in this area are identified.",
"title": ""
},
{
"docid": "cbc2b592efc227a5c6308edfbca51bd6",
"text": "The rapidly growing presence of Internet of Things (IoT) devices is becoming a continuously alluring playground for malicious actors who try to harness their vast numbers and diverse locations. One of their primary goals is to assemble botnets that can serve their nefarious purposes, ranging from Denial of Service (DoS) to spam and advertisement fraud. The most recent example that highlights the severity of the problem is the Mirai family of malware, which is accountable for a plethora of massive DDoS attacks of unprecedented volume and diversity. The aim of this paper is to offer a comprehensive state-of-the-art review of the IoT botnet landscape and the underlying reasons of its success with a particular focus on Mirai and major similar worms. To this end, we provide extensive details on the internal workings of IoT malware, examine their interrelationships, and elaborate on the possible strategies for defending against them.",
"title": ""
},
{
"docid": "9ea0612f646228a3da41b7f55c23e825",
"text": "It is shown that many published models for the Stanford Question Answering Dataset (Rajpurkar et al., 2016) lack robustness, suffering an over 50% decrease in F1 score during adversarial evaluation based on the AddSent (Jia and Liang, 2017) algorithm. It has also been shown that retraining models on data generated by AddSent has limited effect on their robustness. We propose a novel alternative adversary-generation algorithm, AddSentDiverse, that significantly increases the variance within the adversarial training data by providing effective examples that punish the model for making certain superficial assumptions. Further, in order to improve robustness to AddSent’s semantic perturbations (e.g., antonyms), we jointly improve the model’s semantic-relationship learning capabilities in addition to our AddSentDiversebased adversarial training data augmentation. With these additions, we show that we can make a state-of-the-art model significantly more robust, achieving a 36.5% increase in F1 score under many different types of adversarial evaluation while maintaining performance on the regular SQuAD task.",
"title": ""
},
{
"docid": "dc1cfdda40b23849f11187ce890c8f8b",
"text": "Controlled sharing of information is needed and desirable for many applications and is supported in operating systems by access control mechanisms. This paper shows how to extend programming languages to provide controlled sharing. The extension permits expression of access constraints on shared data. Access constraints can apply both to simple objects, and to objects that are components of larger objects, such as bank account records in a bank's data base. The constraints are stated declaratively, and can be enforced by static checking similar to type checking. The approach can be used to extend any strongly-typed language, but is particularly suitable for extending languages that support the notion of abstract data types.",
"title": ""
},
{
"docid": "0a4749ecc23cb04f494a987268704f0f",
"text": "With the growing demand for digital information in health care, the electronic medical record (EMR) represents the foundation of health information technology. It is essential, however, in an industry still largely dominated by paper-based records, that such systems be accepted and used. This research evaluates registered nurses’, certified nurse practitioners and physician assistants’ acceptance of EMR’s as a means to predict, define and enhance use. The research utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical model, along with the Partial Least Square (PLS) analysis to estimate the variance. Overall, the findings indicate that UTAUT is able to provide a reasonable assessment of health care professionals’ acceptance of EMR’s with social influence a significant determinant of intention and use.",
"title": ""
},
{
"docid": "b06fd59d5acdf6dd0b896a62f5d8b123",
"text": "BACKGROUND\nHippocampal volume reduction has been reported inconsistently in people with major depression.\n\n\nAIMS\nTo evaluate the interrelationships between hippocampal volumes, memory and key clinical, vascular and genetic risk factors.\n\n\nMETHOD\nTotals of 66 people with depression and 20 control participants underwent magnetic resonance imaging and clinical assessment. Measures of depression severity, psychomotor retardation, verbal and visual memory and vascular and specific genetic risk factors were collected.\n\n\nRESULTS\nReduced hippocampal volumes occurred in older people with depression, those with both early-onset and late-onset disorders and those with the melancholic subtype. Reduced hippocampal volumes were associated with deficits in visual and verbal memory performance.\n\n\nCONCLUSIONS\nAlthough reduced hippocampal volumes are most pronounced in late-onset depression, older people with early-onset disorders also display volume changes and memory loss. No clear vascular or genetic risk factors explain these findings. Hippocampal volume changes may explain how depression emerges as a risk factor to dementia.",
"title": ""
},
{
"docid": "fe318971645b171929188b091425a8ac",
"text": "Metal interconnections are expected to become the limiting factor for the performance of electronic systems as transistors continue to shrink in size. Replacing them by optical interconnections, at different levels ranging from rack-to-rack down to chip-to-chip and intra-chip interconnections, could provide the low power dissipation, low latencies and high bandwidths that are needed. The implementation of optical interconnections relies on the development of micro-optical devices that are integrated with the microelectronics on chips. Recent demonstrations of silicon low-loss waveguides, light emitters, amplifiers and lasers approach this goal, but a small silicon electro-optic modulator with a size small enough for chip-scale integration has not yet been demonstrated. Here we experimentally demonstrate a high-speed electro-optical modulator in compact silicon structures. The modulator is based on a resonant light-confining structure that enhances the sensitivity of light to small changes in refractive index of the silicon and also enables high-speed operation. The modulator is 12 micrometres in diameter, three orders of magnitude smaller than previously demonstrated. Electro-optic modulators are one of the most critical components in optoelectronic integration, and decreasing their size may enable novel chip architectures.",
"title": ""
},
{
"docid": "ed8ee467e7f40d6ba35cc6f8329ca681",
"text": "This paper proposes an architecture for Software Defined Optical Transport Networks. The SDN Controller includes a network abstraction layer allowing the implementation of cognitive controls and policies for autonomic operation, based on global network view. Additionally, the controller implements a virtualized GMPLS control plane, offloading and simplifying the network elements, while unlocking the implementation of new services such as optical VPNs, optical network slicing, and keeping standard OIF interfaces, such as UNI and NNI. The concepts have been implemented and validated in a real testbed network formed by five DWDM nodes equipped with flexgrid WSS ROADMs.",
"title": ""
},
{
"docid": "56b706edc6d1b6a2ff64770cb3f79c2e",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "aef25b8bc64bb624fb22ce39ad7cad89",
"text": "Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"title": ""
},
{
"docid": "dfd16d21384cf722866c22d30b3f6a18",
"text": "The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only a small numbers of patients (typically N<;20) and usually limited to a single type of lung sound. Larger research studies have also been impeded by the challenge of labeling large volumes of data, which is extremely labor-intensive. In this paper, we present the development of a semi-supervised deep learning algorithm for automatically classify lung sounds from a relatively large number of patients (N=284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data was collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.",
"title": ""
},
{
"docid": "5447d3fe8ed886a8792a3d8d504eaf44",
"text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "6ae289d7da3e923c1288f39fd7a162f6",
"text": "The usage of digital evidence from electronic devices has been rapidly expanding within litigation, and along with this increased usage, the reliance upon forensic computer examiners to acquire, analyze, and report upon this evidence is also rapidly growing. This growing demand for forensic computer examiners raises questions concerning the selection of individuals qualified to perform this work. While courts have mechanisms for qualifying witnesses that provide testimony based on scientific data, such as digital data, the qualifying criteria covers a wide variety of characteristics including, education, experience, training, professional certifications, or other special skills. In this study, we compare task performance responses from forensic computer examiners with an expert review panel and measure the relationship with the characteristics of the examiners to their quality responses. The results of this analysis provide insight into identifying forensic computer examiners that provide high-quality responses.",
"title": ""
}
] |
scidocsrr
|
05144de0a5d1edeb6be8976b6f032506
|
Design of Microwave Filters
|
[
{
"docid": "11f75576e646e9a74e70dcb1a61bdd90",
"text": "Previous designs for CQ filters have required matrix rotation operations on the coupling matrix of the canonic form of the cross-coupled filters. This is a rather awkward and not entirely satisfactory process since the theory is not general, requiring the application of equations specific to each order of filter, and in fact has been developed only as far as even order 10. A new direct CQ synthesis has now been discovered having no such limitations.<<ETX>>",
"title": ""
}
] |
[
{
"docid": "c32af7ce60d3d6eaa09a2876ba5469d3",
"text": "ID: 2423 Y. M. S. Al-Wesabi, Avishek Choudhury, Daehan Won Binghamton University, USA",
"title": ""
},
{
"docid": "7321a25cb98e25f3773447f66f0d176e",
"text": "The biolinguistic perspective regards the language faculty as an organ of the body, along with other cognitive systems. Adopting it, we expect to find three factors that interact to determine (I-) languages attained: genetic endowment (the topic of Universal Grammar), experience, and principles that are language- or even organism-independent. Research has naturally focused on I-languages and UG, the problems of descriptive and explanatory adequacy. The Principles-and-Parameters approach opened the possibility for serious investigation of the third factor, and the attempt to account for properties of language in terms of general considerations of computational efficiency, eliminating some of the technology postulated as specific to language and providing more principled explanation of linguistic phenomena",
"title": ""
},
{
"docid": "e460b586a78b334f1faaab0ad77a2a82",
"text": "This paper introduces an allocation and scheduling algorithm that efficiently handles conditional execution in multi-rate embedded system. Control dependencies are introduced into the task graph model. We propose a mutual exclusion detection algorithm that helps the scheduling algorithm to exploit the resource sharing. Allocation and scheduling are performed simultaneously to take advantage of the resource sharing among those mutual exclusive tasks. The algorithm is fast and efficient,and so is suitable to be used in the inner loop of our hardware/software co-synthesis framework which must call the scheduling routine many times.",
"title": ""
},
{
"docid": "0afe679d5b022cc31a3ce69b967f8d77",
"text": "Cyber-crime has reached unprecedented proportions in this day and age. In addition, the internet has created a world with seemingly no barriers while making a countless number of tools available to the cyber-criminal. In light of this, Computer Forensic Specialists employ state-of-the-art tools and methodologies in the extraction and analysis of data from storage devices used at the digital crime scene. The focus of this paper is to conduct an investigation into some of these Forensic tools eg.Encase®. This investigation will address commonalities across the Forensic tools, their essential differences and ultimately point out what features need to be improved in these tools to allow for effective autopsies of storage devices.",
"title": ""
},
{
"docid": "e104544e8ac61ea6d77415df1deeaf81",
"text": "This thesis is devoted to marker-less 3D human motion tracking in calibrated and synchronized multicamera systems. Pose estimation is based on a 3D model, which is transformed into the image plane and then rendered. Owing to elaborated techniques the tracking of the full body has been achieved in real-time via dynamic optimization or dynamic Bayesian filtering. The objective function of a particle swarm optimization algorithm and the observation model of a particle filter are based on matching between the rendered 3D models in the required poses and image features representing the extracted person. In such an approach the main part of the computational overload is associated with the rendering of 3D models in hypothetical poses as well as determination of value of objective function. Effective methods for rendering of 3D models in real-time with support of OpenGL as well as parallel methods for determining the objective function on the GPU were developed. The elaborated solutions permit 3D tracking of full body motion in real-time.",
"title": ""
},
{
"docid": "5868ec5c17bf7349166ccd0600cc6b07",
"text": "Secure devices are often subject to attacks and behavioural analysis in order to inject faults on them and/or extract otherwise secret information. Glitch attacks, sudden changes on the power supply rails, are a common technique used to inject faults on electronic devices. Detectors are designed to catch these attacks. As the detectors become more efficient, new glitches that are harder to detect arise. Common glitch detection approaches, such as directly monitoring the power rails, can potentially find it hard to detect fast glitches, as these become harder to differentiate from noise. This paper proposes a design which, instead of monitoring the power rails, monitors the effect of a glitch on a sensitive circuit, hence reducing the risk of detecting noise as glitches.",
"title": ""
},
{
"docid": "83f1cb63b10552a5c14748e3cf2dfc92",
"text": "Recent automotive vision work has focused almost exclusively on processing forward-facing cameras. However, future autonomous vehicles will not be viable without a more comprehensive surround sensing, akin to a human driver, as can be provided by 360◦ panoramic cameras. We present an approach to adapt contemporary deep network architectures developed on conventional rectilinear imagery to work on equirectangular 360◦ panoramic imagery. To address the lack of annotated panoramic automotive datasets availability, we adapt a contemporary automotive dataset, via style and projection transformations, to facilitate the cross-domain retraining of contemporary algorithms for panoramic imagery. Following this approach we retrain and adapt existing architectures to recover scene depth and 3D pose of vehicles from monocular panoramic imagery without any panoramic training labels or calibration parameters. Our approach is evaluated qualitatively on crowd-sourced panoramic images and quantitatively using an automotive environment simulator to provide the first benchmark for such techniques within panoramic imagery.",
"title": ""
},
{
"docid": "c863d82ae2b56202d333ffa5bef5dd59",
"text": "We present an algorithm for finding landmarks along a manifold. These landmarks provide a small set of locations spaced out along the manifold such that they capture the low-dimensional nonlinear structure of the data embedded in the high-dimensional space. The approach does not select points directly from the dataset, but instead we optimize each landmark by moving along the continuous manifold space (as approximated by the data) according to the gradient of an objective function. We borrow ideas from active learning with Gaussian processes to define the objective, which has the property that a new landmark is “repelled” by those currently selected, allowing for exploration of the manifold. We derive a stochastic algorithm for learning with large datasets and show results on several datasets, including the Million Song Dataset and articles from the New York Times.",
"title": ""
},
{
"docid": "db316e47f26a33e48d5f62121b07aa10",
"text": "Storing, querying, and analyzing trajectories is becoming increasingly important, as the availability and volumes of trajectory data increases. One important class of trajectory analysis is computing trajectory similarity. This paper introduces and compares four of the most common measures of trajectory similarity: longest common subsequence (LCSS), Fréchet distance, dynamic time warping (DTW), and edit distance. These four measures have been implemented in a new open source R package, freely available on CRAN [19]. The paper highlights some of the differences between these four similarity measures, using real trajectory data, in addition to indicating some of the important emerging applications for measurement of trajectory similarity.",
"title": ""
},
{
"docid": "bd72a921c7bfa4a7db8ca9dd8715fa45",
"text": "Augmented Reality (AR) is growing rapidly and becoming a mature and robust technology, which combines virtual information with the real environment and real-time performance. It is important to ensure the acceptance and success of augmented reality systems. With the growth of elderly users, evidence shows potential trends for AR systems to support the elderly, including transport, ageing in place, entertainment and training. However, there is a lack of research to provide the theoretical framework or AR design principles to support designers when developing suitable AR applications for specific populations (e.g. older people). In my PhD thesis, I will focus on the possibility of developing and applying AR design principles to support the design of applications that address older people's requirements. In this paper, I first discuss the architecture of augmented reality and identify the relationship between different elements. Secondly, the relevant literature has been reviewed in terms of design challenges of AR and design principles. Thirdly, I formulate the five initial design principles as the fundamental work of my PhD. It is expected that design principles could help AR designers to explore quality design alternatives, which could potentially benefit the ageing population. Fourthly, I identify the AR pillbox as an example to explain how design principles can be applied to AR applications. In terms of the methodology, preparation, refinement and validation are the three main stages to achieve the research goal. Preparation stage aims to generate the preliminary AR design principles and identify the relevant scenarios that might assist the designers to understand the principles and explore the design alternatives. In the stages of refinement, a half-day workshop has been conducted to explore different design issues based on different scenarios and refine the preliminary design principles. After that, a new set of design principles will be formulated. The final stage is to validate the effectiveness of new design principles based on the previous workshop’s feedback.",
"title": ""
},
{
"docid": "2ffb20d66a0d5cb64442c2707b3155c6",
"text": "A botnet is a network of compromised hosts that is under the control of a single, malicious entity, often called the botmaster. We present a system that aims to detect bot-infected machines, independent of any prior information about the command and control channels or propagation vectors, and without requiring multiple infections for correlation. Our system relies on detection models that target the characteristic fact that every bot receives commands from the botmaster to which it responds in a specific way. These detection models are generated automatically from network traffic traces recorded from actual bot instances. We have implemented the proposed approach and demonstrate that it can extract effective detection models for a variety of different bot families. These models are precise in describing the activity of bots and raise very few false positives.",
"title": ""
},
{
"docid": "21f8d5f566efa477597e4bf4a8121b29",
"text": "Silicon epitaxial deposition is a process strongly influenced by wafer temperature behavior, which has to be constantly monitored to avoid the production of defective wafers. However, temperature measurements are not reliable, and the sensors have to be appropriately calibrated with some dedicated procedure. A predictive maintenance (PdM) system is proposed with the aim of predicting process behavior and scheduling control actions on the sensors in advance. Two different prediction techniques have been employed and compared: the Kalman predictor and the particle filter with Gaussian kernel density estimator. The accuracy of the PdM module has been tested on real industrial production datasets.",
"title": ""
},
{
"docid": "bbdd4ffd6797d00c3547626959118b92",
"text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.",
"title": ""
},
{
"docid": "094570518e943330ff8d9e1c714698cb",
"text": "The concept of taking surface wave as an assistant role to obtain wide beams with main directions tilting to endfire is introduced in this paper. Planar Yagi-Uda-like antennas support TE0 surface wave propagation and exhibit endfire radiation patterns. However, when such antennas are printed on a thin grounded substrate, there is no propagation of TE mode and beams tilting to broadside. Benefiting from the advantage that the high impedance surface (HIS) could support TE and/or TM modes propagation, the idea of placing a planar Yagi-Uda-like antenna in close proximity to a HIS to excite unidirectional predominately TE surface wave in HIS is proposed. Power radiated by the feed antenna, in combination with power diffracted by the surface wave determines the total radiation pattern, resulting in the desired pattern. For verification, a compact, low-profile, pattern-reconfigurable parasitic array (having an interstrip spacing of 0.048 λ0) with an integrated DC biasing circuit was fabricated and tested. Good agreement was obtained between measured and simulated results.",
"title": ""
},
{
"docid": "8335faee33da234e733d8f6c95332ec3",
"text": "Myanmar script uses no space between words and syllable segmentation represents a significant process in many NLP tasks such as word segmentation, sorting, line breaking and so on. In this study, a rulebased approach of syllable segmentation algorithm for Myanmar text is proposed. Segmentation rules were created based on the syllable structure of Myanmar script and a syllable segmentation algorithm was designed based on the created rules. A segmentation program was developed to evaluate the algorithm. A training corpus containing 32,283 Myanmar syllables was tested in the program and the experimental results show an accuracy rate of 99.96% for segmentation.",
"title": ""
},
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
},
{
"docid": "98019be2037de404409d24618fadaf22",
"text": "Fournier's gangrene is a condition marked by fulminant polymicrobial necrotizing fasciitis of the urogenital and perineal areas. We present a patient with Fournier's gangrene and describe the physical examination and bedside sonographic findings. These findings can assist in the evaluation of patients with concerning symptoms so there can be timely administration of antibiotics and specialist consultation when necessary.",
"title": ""
},
{
"docid": "1643198760d175b701cf21553ce0f183",
"text": "BACKGROUND\nThis in vivo study evaluated the difference of two well-known intraoral scanners used in dentistry, namely iTero (Align Technology) and TRIOS (3Shape).\n\n\nMETHODS\nThirty-two participants underwent intraoral scans with TRIOS and iTero scanners, as well as conventional alginate impressions. The scans obtained with the two intraoral scanners were compared with each other and were also compared with the corresponding model scans by means of three-dimensional surface analysis. The average differences between the two intraoral scans on the surfaces were evaluated by color-mapping. The average differences in the three-dimensional direction between each intraoral scans and its corresponding model scan were calculated at all points on the surfaces.\n\n\nRESULTS\nThe average differences between the two intraoral scanners were 0.057 mm at the maxilla and 0.069 mm at the mandible. Color histograms showed that local deviations between the two scanners occurred in the posterior area. As for difference in the three-dimensional direction, there was no statistically significant difference between two scanners.\n\n\nCONCLUSIONS\nAlthough there were some deviations in visible inspection, there was no statistical significance between the two intraoral scanners.",
"title": ""
},
{
"docid": "488c7437a32daec6fbad12e07bb31f4c",
"text": "Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.",
"title": ""
},
{
"docid": "f10294ed332670587cf9c100f2d75428",
"text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.",
"title": ""
}
] |
scidocsrr
|
520f1dca620375534a26ec5941d88d95
|
A Lightweight Simulator for Autonomous Driving Motion Planning Development
|
[
{
"docid": "70710daefe747da7d341577947b6b8ff",
"text": "This paper describes an automated lane centering/changing control algorithm that was developed at General Motors Research and Development. Over the past few decades, there have been numerous studies in the autonomous vehicle motion control. These studies typically focused on improving the control accuracy of the autonomous driving vehicles. In addition to the control accuracy, driver/passenger comfort is also an important performance measure of the system. As an extension of authors' prior study, this paper further considers vehicle motion control to provide driver/passenger comfort based on the adjustment of the lane change maneuvering time in various traffic situations. While defining the driver/passenger comfort level is a human factor study topic, this paper proposes a framework to integrate the motion smoothness into the existing lane centering/changing control problem. The proposed algorithm is capable of providing smooth and aggressive lane change maneuvers according to traffic situation and driver preference. Several simulation results as well as on-road vehicle test results confirm the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
}
] |
[
{
"docid": "4dda701b0bf796f044abf136af7b0a9c",
"text": "Legacy substation automation protocols and architectures typically provided basic functionality for power system automation and were designed to accommodate the technical limitations of the networking technology available for implementation. There has recently been a vast improvement in networking technology that has changed dramatically what is now feasible for power system automation in the substation. Technologies such as switched Ethernet, TCP/IP, high-speed wide area networks, and high-performance low-cost computers are providing capabilities that could barely be imagined when most legacy substation automation protocols were designed. In order to take advantage of modern technology to deliver additional new benefits to users of substation automation, the International Electrotechnical Commission (IEC) has developed and released a new global standard for substation automation: IEC 61850. The paper provides a basic technical overview of IEC 61850 and discusses the benefits of each major aspect of the standard. The concept of a virtual model comprising both physical and logical device models that includes a set of standardized communications services are described along with explanations of how these standardized models, object naming conventions, and communication services bring significant benefits to the substation automation user. New services to support self-describing devices and object-orient peer-to-peer data exchange are explained with an emphasis on how these services can be applied to reduce costs for substation automation. The substation configuration language (SCL) of IEC 61850 is presented with information on how the standardization of substation configuration will impact the future of substation automation. The paper concludes with a brief introduction to the UCA International Users Group as a forum where users and suppliers cooperate in improving substation automation with testing, education, and demonstrations of IEC 61850 and other IEC standards technology",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "9d2ec490b7efb23909abdbf5f209f508",
"text": "Terrestrial Laser scanner (TLS) has been widely used in our recent architectural heritage projects and huge quantity of point cloud data was gotten. In order to process the huge quantity of point cloud data effectively and reconstruct their 3D models, more effective methods should be developed based on existing automatic or semiautomatic point cloud processing algorithms. Here introduce a new algorithm for rapid extracting the pillar features of Chinese ancient buildings from their point cloud data, the algorithm has the least human interaction in the data processing and is more efficient to extract pillars from point cloud data than existing feature extracting algorithms. With this algorithm we identify the pillar features by dividing the point cloud into slices firstly, and then get the projective parameters of pillar objects in selected slices, the next compare the local projective parameters in adjacent slices, the next combine them to get the global parameters of the pillars and at last reconstruct the 3d pillar models.",
"title": ""
},
{
"docid": "700c016add5f44c3fbd560d84b83b290",
"text": "This paper describes a novel framework, called I<scp>n</scp>T<scp>ens</scp>L<scp>i</scp> (\"intensely\"), for producing fast single-node implementations of dense tensor-times-matrix multiply (T<scp>tm</scp>) of arbitrary dimension. Whereas conventional implementations of T<scp>tm</scp> rely on explicitly converting the input tensor operand into a matrix---in order to be able to use any available and fast general matrix-matrix multiply (G<scp>emm</scp>) implementation---our framework's strategy is to carry out the T<scp>tm</scp> <i>in-place</i>, avoiding this copy. As the resulting implementations expose tuning parameters, this paper also describes a heuristic empirical model for selecting an optimal configuration based on the T<scp>tm</scp>'s inputs. When compared to widely used single-node T<scp>tm</scp> implementations that are available in the Tensor Toolbox and Cyclops Tensor Framework (C<scp>tf</scp>), In-TensLi's in-place and input-adaptive T<scp>tm</scp> implementations achieve 4× and 13× speedups, showing Gemm-like performance on a variety of input sizes.",
"title": ""
},
{
"docid": "9afb086e38b883676a503bb10fba3e8f",
"text": "This paper reports a structured literature survey of research in wearable technology for upper-extremity rehabilitation, e.g., after stroke, spinal cord injury, for multiple sclerosis patients or even children with cerebral palsy. A keyword based search returned 61 papers relating to this topic. Examination of the abstracts of these papers identified 19 articles describing distinct wearable systems aimed at upper extremity rehabilitation. These are classified in three categories depending on their functionality: movement and posture monitoring; monitoring and feedback systems that support rehabilitation exercises, serious games for rehabilitation training. We characterize the state of the art considering respectively the reported performance of these technologies, availability of clinical evidence, or known clinical applications.",
"title": ""
},
{
"docid": "e5f30c0d2c25b6b90c136d1c84ba8a75",
"text": "Modern systems for real-time hand tracking rely on a combination of discriminative and generative approaches to robustly recover hand poses. Generative approaches require the specification of a geometric model. In this paper, we propose a the use of sphere-meshes as a novel geometric representation for real-time generative hand tracking. How tightly this model fits a specific user heavily affects tracking precision. We derive an optimization to non-rigidly deform a template model to fit the user data in a number of poses. This optimization jointly captures the user's static and dynamic hand geometry, thus facilitating high-precision registration. At the same time, the limited number of primitives in the tracking template allows us to retain excellent computational performance. We confirm this by embedding our models in an open source real-time registration algorithm to obtain a tracker steadily running at 60Hz. We demonstrate the effectiveness of our solution by qualitatively and quantitatively evaluating tracking precision on a variety of complex motions. We show that the improved tracking accuracy at high frame-rate enables stable tracking of extended and complex motion sequences without the need for per-frame re-initialization. To enable further research in the area of high-precision hand tracking, we publicly release source code and evaluation datasets.",
"title": ""
},
{
"docid": "997993e389cdb1e40714e20b96927890",
"text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.",
"title": ""
},
{
"docid": "937c8e25440c52fc6fde84d59c60ba7a",
"text": "We describe how paperXML, a logical document structure markup for scholarly articles, is generated on the basis of OCR tool outputs. PaperXML has been initially developed for the ACL Anthology Searchbench. The main purpose was to robustly provide uniform access to sentences in ACL Anthology papers from the past 46 years, ranging from scanned, typewriter-written conference and workshop proceedings papers, up to recent high-quality typeset, born-digital journal articles, with varying layouts. PaperXML markup includes information on page and paragraph breaks, section headings, footnotes, tables, captions, boldface and italics character styles as well as bibliographic and publication metadata. The role of paperXML in the ACL Contributed Task Rediscovering 50 Years of Discoveries is to serve as fall-back source (1) for older, scanned papers (mostly published before the year 2000), for which born-digital PDF sources are not available, (2) for borndigital PDF papers on which the PDFExtract method failed, (3) for document parts where PDFExtract does not output useful markup such as currently for tables. We sketch transformation of paperXML into the ACL Contributed Task’s TEI P5 XML.",
"title": ""
},
{
"docid": "0886827f658cd8744e926bcc1396769f",
"text": "An integrator circuit is presented in this paper that has used Differntial Difference Current Conveyor Transconductance Amplifier (DDCCTA). It has one DDCCTA and one passive component. It has been realized only with first order low pass response. The operation of the circuit has been observed and enforced at a supply voltage of ± 1.8V (bias current 50pA) using cadence and the model parameters of gpdk 180nm CMOS technology. The worthy of the proposed circuit has been test checked using DDCCTA and further tested for its efficiency on a laboratory breadboard. In this commercially available AD844AN and LM13600 ICs are used. Further, the circuit presented in this paper is impermeable to noise, possessing low voltage and insensitive to temperature.",
"title": ""
},
{
"docid": "d56e64ac41b4437a4c1409f17a6c7cf2",
"text": "A high step-up forward flyback converter with nondissipative snubber for solar energy application is introduced here. High gain DC/DC converters are the key part of renewable energy systems .The designing of high gain DC/DC converters is imposed by severe demands. It produces high step-up voltage gain by using a forward flyback converter. The energy in the coupled inductor leakage inductance can be recycled via a nondissipative snubber on the primary side. It consists of a combination of forward and flyback converter on the secondary side. It is a hybrid type of forward and flyback converter, sharing the transformer for increasing the utilization factor. By stacking the outputs of them, extremely high voltage gain can be obtained with small volume and high efficiency even with a galvanic isolation. The separated secondary windings in low turn-ratio reduce the voltage stress of the secondary rectifiers, contributing to achievement of high efficiency. Here presents a high step-up topology employing a series connected forward flyback converter, which has a series connected output for high boosting voltage-transfer gain. A MATLAB/Simulink model of the Photo Voltaic (PV) system using Maximum Power Point Tracking (MPPT) has been implimented along with a DC/DC hardware prototype.",
"title": ""
},
{
"docid": "6a490e3bc9e03222ebaaa6484de4b6a6",
"text": "This paper introduces GlobalFS, a POSIX-compliant geographically distributed file system. GlobalFS builds on two fundamental building blocks, an atomic multicast group communication abstraction and multiple instances of a single-site data store. We define four execution modes and show how all file system operations can be implemented with these modes while ensuring strong consistency and tolerating failures. We describe the GlobalFS prototype in detail and report on an extensive performance assessment. We have deployed GlobalFS across all EC2 regions and show that the system scales geographically, providing performance comparable to other state-of-the-art distributed file systems for local commands and allowing for strongly consistent operations over the whole system. The code of GlobalFS is available as open source.",
"title": ""
},
{
"docid": "11644dafde30ee5608167c04cb1f511c",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.",
"title": ""
},
{
"docid": "0cc665089be9aa8217baac32f0385f41",
"text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.",
"title": ""
},
{
"docid": "b17f5cfea81608e5034121113dbc8de4",
"text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.",
"title": ""
},
{
"docid": "ba695228c0fbaf91d6db972022095e98",
"text": "This study evaluated the critical period hypothesis for second language (L2) acquisition. The participants were 240 native speakers of Korean who differed according to age of arrival (AOA) in the United States (1 to 23 years), but were all experienced in English (mean length of residence 5 15 years). The native Korean participants’ pronunciation of English was evaluated by having listeners rate their sentences for overall degree of foreign accent; knowledge of English morphosyntax was evaluated using a 144-item grammaticality judgment test. As AOA increased, the foreign accents grew stronger, and the grammaticality judgment test scores decreased steadily. However, unlike the case for the foreign accent ratings, the effect of AOA on the grammaticality judgment test scores became nonsignificant when variables confounded with AOA were controlled. This suggested that the observed decrease in morphosyntax scores was not the result of passing a maturationally defined critical period. Additional analyses showed that the score for sentences testing knowledge of rule based, generalizable aspects of English morphosyntax varied as a function of how much education the Korean participants had received in the United States. The scores for sentences testing lexically based aspects of English morphosyntax, on the other hand, depended on how much the Koreans used English. © 1999 Academic Press",
"title": ""
},
{
"docid": "0d020e98448f2413e271c70e2a321fb4",
"text": "Material classification is an important application in computer vision. The inherent property of materials to partially polarize the reflected light can serve as a tool to classify them. In this paper, a real-time polarization sensing CMOS image sensor using a wire grid polarizer is proposed. The image sensor consist of an array of 128 × 128 pixels, occupies an area of 5 × 4 mm2 and it has been designed and fabricated in a 180-nm CMOS process. We show that this image sensor can be used to differentiate between metal and dielectric surfaces in real-time due to the different nature in partially polarizing the specular and diffuse reflection components of the reflected light. This is achieved by calculating the Fresnel reflection coefficients, the degree of polarization and the variations in the maximum and minimum transmitted intensities for varying specular angle of incidence. Differences in the physical parameters for various metal surfaces result in different surface reflection behavior, influencing the Fresnel reflection coefficients. It is also shown that the image sensor can differentiate among various metals by sensing the change in the polarization Fresnel ratio.",
"title": ""
},
{
"docid": "c240da3cde126606771de3e6b3432962",
"text": "Oscillations in the alpha and beta bands can display either an event-related blocking response or an event-related amplitude enhancement. The former is named event-related desynchronization (ERD) and the latter event-related synchronization (ERS). Examples of ERS are localized alpha enhancements in the awake state as well as sigma spindles in sleep and alpha or beta bursts in the comatose state. It was found that alpha band activity can be enhanced over the visual region during a motor task, or during a visual task over the sensorimotor region. This means ERD and ERS can be observed at nearly the same time; both form a spatiotemporal pattern, in which the localization of ERD characterizes cortical areas involved in task-relevant processing, and ERS marks cortical areas at rest or in an idling state.",
"title": ""
},
{
"docid": "95f1862369f279f20fc1fb10b8b41ea8",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted , or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Intrusion detection in wireless ad-hoc networks / editors, Nabendu Chaki and Rituparna Chaki. pages cm Includes bibliographical references and index. Contents Preface ix a b o u t t h e e d i t o r s xi c o n t r i b u t o r s xiii chaP t e r 1 intro d u c t i o n 1 Nova ru N De b , M a N a l i CH a k r a bor T y, a N D N a beN Du CH a k i chaP t e r 2 a r c h i t e c t u r e a n d o r g a n i z at i o n is s u e s 43 M a N a l i CH a k r a bor T y, Nova ru N De b , De bDu T Ta ba r M a N roy, a N D r i T u pa r N a CH a k i chaP t e r 3 routin g f o r …",
"title": ""
},
{
"docid": "03a7bcafb322ee8f7812d66abbd36ce6",
"text": "This paper presents a Deep Bidirectional Long Short Term Memory (LSTM) based Recurrent Neural Network architecture for text recognition. This architecture uses Connectionist Temporal Classification (CTC) for training to learn the labels of an unsegmented sequence with unknown alignment. This work is motivated by the results of Deep Neural Networks for isolated numeral recognition and improved speech recognition using Deep BLSTM based approaches. Deep BLSTM architecture is chosen due to its ability to access long range context, learn sequence alignment and work without the need of segmented data. Due to the use of CTC and forward backward algorithms for alignment of output labels, there are no unicode re-ordering issues, thus no need of lexicon or postprocessing schemes. This is a script independent and segmentation free approach. This system has been implemented for the recognition of unsegmented words of printed Oriya text. This system achieves 4.18% character level error and 12.11% word error rate on printed Oriya text.",
"title": ""
}
] |
scidocsrr
|
deefdb6e5bce6cd80d5f5d349a92c5f2
|
MoFAP: A Multi-level Representation for Action Recognition
|
[
{
"docid": "c439a5c8405d8ba7f831a5ac4b1576a7",
"text": "1. Cao, L., Liu, Z., Huang, T.S.: Cross-dataset action detection. In: CVPR (2010). 2. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR (2011) 3. Lan, T., etc.: Discriminative figure-centric models for joint action localization and recognition. In: ICCV (2011). 4. Tian, Y., Sukthankar, R., Shah, M.: Spatiotemporal deformable part models for action detection. In: CVPR (2013). 5. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: ICCV (2013). Experiments",
"title": ""
},
{
"docid": "112fc675cce705b3bab9cb66ca1c08da",
"text": "Our Approach, 0.66 GIST 29.7 Spa>al Pyramid HOG 29.8 Spa>al Pyramid SIFT 34.4 ROI-‐GIST 26.5 Scene DPM 30.4 MM-‐Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representa,ve: Need to occur frequently enough • discrimina,ve: Need to be different enough from the rest of the “visual world” Goal: a mid-‐level visual representa>on Experimental Analysis Bonus: works even be`er if weakly supervised!",
"title": ""
},
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "52bce24f8ec738f9b9dfd472acd6b101",
"text": "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5%, 80.6% and 40.7% respectively, which are the best reported results to date.",
"title": ""
}
] |
[
{
"docid": "aa5a0018ae771cf6cfbca628b5d1e1fd",
"text": "Cloud computing discusses about sharing any imaginable entity such as process units, storage devices or software. The provided service is utterly economical and expandable. Cloud computing attractive benefits entice huge interest of both business owners and cyber thefts. Consequently, the “computer forensic investigation” step into the play to find evidences against criminals. As a result of the new technology and methods used in cloud computing, the forensic investigation techniques face different types of issues while inspecting the case. The most profound challenges are difficulties to deal with different rulings obliged on variety of data saved in different locations, limited access to obtain evidences from cloud and even the issue of seizing the physical evidence for the sake of integrity validation or evidence presentation. This paper suggests a simple yet very useful solution to conquer the aforementioned issues in forensic investigation of cloud systems. Utilizing TPM in hypervisor, implementing multi-factor authentication and updating the cloud service provider policy to provide persistent storage devices are some of the recommended solutions. Utilizing the proposed solutions, the cloud service will be compatible to the current digital forensic investigation practices; alongside it brings the great advantage of being investigable and consequently the trust of the client.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
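The reranking passage above learns sentence representations with a convolutional network and a similarity function over text pairs. A small sketch of that general idea, under assumed dimensions and a bilinear similarity (not the authors' exact architecture), could look as follows.

```python
# Sketch of a convolutional sentence encoder plus a learned pair similarity.
# Vocabulary size, embedding and filter dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ConvSentenceEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=50, n_filters=100, width=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width, padding=width // 2)

    def forward(self, tokens):                    # tokens: (batch, seq_len) word ids
        e = self.emb(tokens).transpose(1, 2)      # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(e))              # (batch, n_filters, seq_len)
        return h.max(dim=2).values                # max-pooled sentence vector

class PairScorer(nn.Module):
    def __init__(self, n_filters=100):
        super().__init__()
        self.encoder = ConvSentenceEncoder(n_filters=n_filters)
        self.M = nn.Linear(n_filters, n_filters, bias=False)  # bilinear similarity

    def forward(self, query, candidate):
        q = self.encoder(query)
        c = self.encoder(candidate)
        return (self.M(q) * c).sum(dim=1)         # one relevance score per pair

scorer = PairScorer()
score = scorer(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 20)))
```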
{
"docid": "29f17b7d7239a2845d513976e4981d6a",
"text": "Agriculture is the backbone of the Indian economy. As all know that demand of agricultural products are increasing day by day as the population is ever increasing, so there is a need to minimize labor, limit the use of water and increase the production of crops. So there is a need to switch from traditional agriculture to the modern agriculture. The introduction of internet of things into agriculture modernization will help solve these problems. This paper presents the IOT based agriculture production system which will monitor or analyze the crop environment like temperature humidity and moisture content in soil. This paper uses the integration of RFID technology and sensors. As both have different objective sensors are for sensing and RIFD technology is for identification This will effectively solve the problem of farmer, increase the yield and saves his time, power, money.",
"title": ""
},
{
"docid": "53aa1145047cc06a1c401b04896ff1b1",
"text": "Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, there is a strong demand for the development of computer based image analysis systems. In this work, the focus is on the segmentation of the glomeruli constituting a highly relevant structure in renal histopathology, which has not been investigated before in combination with CNNs. We propose two different CNN cascades for segmentation applications with sparse objects. These approaches are applied to the problem of glomerulus segmentation and compared with conventional fully-convolutional networks. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained. Combined with qualitative and further object-level analyses the obtained results are assessed as excellent also compared to recent approaches. In conclusion, we can state that especially one of the proposed cascade networks proved to be a highly powerful tool for segmenting the renal glomeruli providing best segmentation accuracies and also keeping the computing time at a low level.",
"title": ""
},
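The glomerulus-segmentation passage reports a pixel-level Dice similarity coefficient of 0.90. For reference, this is the standard Dice metric, sketched here on dummy binary masks; the cascade networks themselves are not reproduced.

```python
# Pixel-level Dice similarity coefficient between two binary segmentation masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """pred, target: boolean or {0,1} arrays of identical shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
ref  = np.zeros((64, 64), dtype=bool); ref[12:32, 12:32] = True
print(round(dice_coefficient(pred, ref), 3))
```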
{
"docid": "82535c102f41dc9d47aa65bd71ca23be",
"text": "We report on an experiment that examined the influence of anthropomorphism and perceived agency on presence, copresence, and social presence in a virtual environment. The experiment varied the level of anthropomorphism of the image of interactants: high anthropomorphism, low anthropomorphism, or no image. Perceived agency was manipulated by telling the participants that the image was either an avatar controlled by a human, or an agent controlled by a computer. The results support the prediction that people respond socially to both human and computer-controlled entities, and that the existence of a virtual image increases tele-presence. Participants interacting with the less-anthropomorphic image reported more copresence and social presence than those interacting with partners represented by either no image at all or by a highly anthropomorphic image of the other, indicating that the more anthropomorphic images set up higher expectations that lead to reduced presence when these expectations were not met.",
"title": ""
},
{
"docid": "e9b7eba9f15440ec7112a1938fad1473",
"text": "Recovery is not a new concept within mental health, although in recent times, it has come to the forefront of the policy agenda. However, there is no universal definition of recovery, and it is a contested concept. The aim of this study was to examine the British literature relating to recovery in mental health. Three contributing groups are identified: service users, health care providers and policy makers. A review of the literature was conducted by accessing all relevant published texts. A search was conducted using these terms: 'recovery', 'schizophrenia', 'psychosis', 'mental illness' and 'mental health'. Over 170 papers were reviewed. A thematic analysis was conducted. Six main themes emerged, which were examined from the perspective of the stakeholder groups. The dominant themes were identity, the service provision agenda, the social domain, power and control, hope and optimism, risk and responsibility. Consensus was found around the belief that good quality care should be made available to service users to promote recovery both as inpatient or in the community. However, the manner in which recovery was defined and delivered differed between the groups.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "e7a51207dd5119ad22fbf35a7b4afca7",
"text": "AIM\nTo characterize types of university students based on satisfaction with life domains that affect eating habits, satisfaction with food-related life and subjective happiness.\n\n\nMATERIALS AND METHODS\nA questionnaire was applied to a nonrandom sample of 305 students of both genders in five universities in Chile. The questionnaire included the abbreviated Multidimensional Student's Life Satisfaction Scale (MSLSS), Satisfaction with Food-related Life Scale (SWFL) and the Subjective Happiness Scale (SHS). Eating habits, frequency of food consumption in and outside the place of residence, approximate height and weight and sociodemographic variables were measured.\n\n\nRESULTS\nUsing factor analysis, the five-domain structure of the MSLSS was confirmed with 26 of the 30 items of the abbreviated version: Family, Friends, Self, Environment and University. Using cluster analysis four types of students were distinguished that differ significantly in the MSLSS global and domain scores, SWFL and SHS scores, gender, ownership of a food allowance card funded by the Chilean government, importance attributed to food for well-being and socioeconomic status.\n\n\nCONCLUSIONS\nHigher levels of life satisfaction and happiness are associated with greater satisfaction with food-related life. Other major life domains that affect students' subjective well-being are Family, Friends, University and Self. Greater satisfaction in some domains may counterbalance the lower satisfaction in others.",
"title": ""
},
{
"docid": "8b34b86cb1ce892a496740bfbff0f9c5",
"text": "Common subexpression elimination is commonly employed to reduce the number of operations in DSP algorithms after decomposing constant multiplications into shifts and additions. Conventional optimization techniques for finding common subexpressions can optimize constant multiplications with only a single variable at a time, and hence cannot fully optimize the computations with multiple variables found in matrix form of linear systems like DCT, DFT etc. We transform these computations such that all common subexpressions involving any number of variables can be detected. We then present heuristic algorithms to select the best set of common subexpressions. Experimental results show the superiority of our technique over conventional techniques for common subexpression elimination.",
"title": ""
},
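The passage above searches for common subexpressions across constant multiplications decomposed into shifts and additions. A deliberately simplified toy, assuming unsigned binary decompositions and only two-term patterns (real methods use signed-digit forms and more elaborate selection heuristics), is sketched below.

```python
# Toy sketch: count 2-term common subexpressions across several constants
# after a shift-and-add decomposition. A pair of set bits at distance d
# corresponds to the subexpression x + (x << d), reusable wherever d recurs.
from collections import Counter
from itertools import combinations

def bit_positions(c):
    return [i for i in range(c.bit_length()) if (c >> i) & 1]

def common_subexpressions(constants):
    patterns = Counter()
    for c in constants:
        for lo, hi in combinations(bit_positions(c), 2):
            patterns[hi - lo] += 1
    # Distances seen more than once are candidates for sharing an adder.
    return {d: n for d, n in patterns.items() if n > 1}

# Example constants (made up), e.g. one column of a transform matrix.
print(common_subexpressions([23, 81, 45]))
```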
{
"docid": "b4e942dc860e127d6370d4425176d62f",
"text": "Several years ago we introduced the Balanced Scorecard (Kaplan and Norton 1992). We began with the premise that an exclusive reliance on financial measures in a management system is insufficient. Financial measures are lag indicators that report on the outcomes from past actions. Exclusive reliance on financial indicators could promote behavior that sacrifices long-term value creation for short-term performance (Porter 1992; AICPA 1994). The Balanced Scorecard approach retains measures of financial performance-the lagging outcome indicators-but supplements these with measures on the drivers, the lead indicators, of future financial performance.",
"title": ""
},
{
"docid": "4294edb250b333a0fe5863860bcb7a8a",
"text": "Present-day malware analysis techniques use both virtualized and emulated environments to analyze malware. The reason is that such environments provide isolation and system restoring capabilities, which facilitate automated analysis of malware samples. However, there exists a class of malware, called VM-aware malware, which is capable of detecting such environments and then hide its malicious behavior to foil the analysis. Because of the artifacts introduced by virtualization or emulation layers, it has always been and will always be possible for malware to detect virtual environments.\n The definitive way to observe the actual behavior of VM-aware malware is to execute them in a system running on real hardware, which is called a \"bare-metal\" system. However, after each analysis, the system must be restored back to the previous clean state. This is because running a malware program can leave the system in an instable/insecure state and/or interfere with the results of a subsequent analysis run. Most of the available state-of-the-art system restore solutions are based on disk restoring and require a system reboot. This results in a significant downtime between each analysis. Because of this limitation, efficient automation of malware analysis in bare-metal systems has been a challenge.\n This paper presents the design, implementation, and evaluation of a malware analysis framework for bare-metal systems that is based on a fast and rebootless system restore technique. Live system restore is accomplished by restoring the entire physical memory of the analysis operating system from another, small operating system that runs outside of the target OS. By using this technique, we were able to perform a rebootless restore of a live Windows system, running on commodity hardware, within four seconds. We also analyzed 42 malware samples from seven different malware families, that are known to be \"silent\" in a virtualized or emulated environments, and all of them showed their true malicious behavior within our bare-metal analysis environment.",
"title": ""
},
{
"docid": "b2132ee641e8b2ae5da9f921e3f0ecd5",
"text": "action into more concrete ones. Each dashed arrow maps a task into a plan of actions. Cambridge University Press 978-1-107-03727-4 — Automated Planning and Acting Malik Ghallab , Dana Nau , Paolo Traverso Excerpt More Information www.cambridge.org © in this web service Cambridge University Press 1.2 Conceptual View of an Actor 7 above it, and decides what activities need to be performed to carry out those tasks. Performing a task may involve reining it into lower-level steps, issuing subtasks to other components below it in the hierarchy, issuing commands to be executed by the platform, and reporting to the component that issued the task. In general, tasks in different parts of the hierarchymay involve concurrent use of different types of models and specialized reasoning functions. This example illustrates two important principles of deliberation: hierarchical organization and continual online processing. Hierarchically organized deliberation. Some of the actions the actor wishes to perform do not map directly into a command executable by its platform. An action may need further reinement and planning. This is done online and may require different representations, tools, and techniques from the ones that generated the task. A hierarchized deliberation process is not intended solely to reduce the search complexity of ofline plan synthesis. It is needed mainly to address the heterogeneous nature of the actions about which the actor is deliberating, and the corresponding heterogeneous representations and models that such deliberations require. Continual online deliberation.Only in exceptional circumstances will the actor do all of its deliberation ofline before executing any of its planned actions. Instead, the actor generally deliberates at runtime about how to carry out the tasks it is currently performing. The deliberation remains partial until the actor reaches its objective, including through lexible modiication of its plans and retrials. The actor’s predictive models are often limited. Its capability to acquire and maintain a broad knowledge about the current state of its environment is very restricted. The cost of minor mistakes and retrials are often lower than the cost of extensive modeling, information gathering, and thorough deliberation. Throughout the acting process, the actor reines and monitors its actions; reacts to events; and extends, updates, and repairs its plan on the basis of its perception focused on the relevant part of the environment. Different parts of the actor’s hierarchy often use different representations of the state of the actor and its environment. These representations may correspond to different amounts of detail in the description of the state and different mathematical constructs. In Figure 1.2, a graph of discrete locations may be used at the upper levels, while the lower levels may use vectors of continuous coniguration variables for the robot limbs. Finally, because complex deliberations can be compiled down by learning into lowlevel commands, the frontier between deliberation functions and the execution platform is not rigid; it evolves with the actor’s experience.",
"title": ""
},
{
"docid": "8e0ec02b22243b4afb04a276712ff6cf",
"text": "1 Morphology with or without Affixes The last few years have seen the emergence of several clearly articulated alternative approaches to morphology. One such approach rests on the notion that only stems of the so-called lexical categories (N, V, A) are morpheme \"pieces\" in the traditional sense—connections between (bundles of) meaning (features) and (bundles of) sound (features). What look like affixes on this view are merely the by-product of morphophonological rules called word formation rules (WFRs) that are sensitive to features associated with the lexical categories, called lexemes. Such an amorphous or affixless theory, adumbrated by Beard (1966) and Aronoff (1976), has been articulated most notably by Anderson (1992) and in major new studies by Aronoff (1992) and Beard (1991). In contrast, Lieber (1992) has refined the traditional notion that affixes as well as lexical stems are \"mor-pheme\" pieces whose lexical entries relate phonological form with meaning and function. For Lieber and other \"lexicalists\" (see, e.g., Jensen 1990), the combining of lexical items creates the words that operate in the syntax. In this paper we describe and defend a third theory of morphology , Distributed Morphology, 1 which combines features of the affixless and the lexicalist alternatives. With Anderson, Beard, and Aronoff, we endorse the separation of the terminal elements involved in the syntax from the phonological realization of these elements. With Lieber and the lexicalists, on the other hand, we take the phonological realization of the terminal elements in the syntax to be governed by lexical (Vocabulary) entries that relate bundles of morphosyntactic features to bundles of pho-nological features. We have called our approach Distributed Morphology (hereafter DM) to highlight the fact that the machinery of what traditionally has been called morphology is not concentrated in a single component of the gram",
"title": ""
},
{
"docid": "209472a5a37a3bb362e43d1b0abb7fd3",
"text": "The goals of the review are threefold: (a) to highlight the educational and employment consequences of poorly developed mathematical competencies; (b) overview the characteristics of children with mathematical learning disability (MLD) and with persistently low achievement (LA) in mathematics; and (c) provide a primer on cognitive science research that is aimed at identifying the cognitive mechanisms underlying these learning disabilities and associated cognitive interventions. Literatures on the educational and economic consequences of poor mathematics achievement were reviewed and integrated with reviews of epidemiological, behavioral genetic, and cognitive science studies of poor mathematics achievement. Poor mathematical competencies are common among adults and result in employment difficulties and difficulties in many common day-to-day activities. Among students, ∼ 7% of children and adolescents have MLD and another 10% show persistent LA in mathematics, despite average abilities in most other areas. Children with MLD and their LA peers have deficits in understanding and representing numerical magnitude, difficulties retrieving basic arithmetic facts from long-term memory, and delays in learning mathematical procedures. These deficits and delays cannot be attributed to intelligence but are related to working memory deficits for children with MLD, but not LA children. These individuals have identifiable number and memory delays and deficits that seem to be specific to mathematics learning. Interventions that target these cognitive deficits are in development and preliminary results are promising.",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "05b6f7fd65ae6eee7fb3ae44e98fb2f9",
"text": "We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation. First, we learn local controllers that are able to perform the task starting at a predefined initial state. These controllers are constructed using trajectory optimization with respect to locally-linear time-varying models learned directly from sensor data. In some cases, we initialize the optimizer with human demonstrations collected via teleoperation in a virtual environment. We demonstrate that such controllers can perform the task robustly, both in simulation and on the physical platform, for a limited range of initial conditions around the trained starting state. We then consider two interpolation methods for generalizing to a wider range of initial conditions: deep learning, and nearest neighbors. We find that nearest neighbors achieve higher performance under full observability, while a neural network proves advantages under partial observability: it uses only tactile and proprioceptive feedback but no feedback about the object (i.e. it performs the task blind) and learns a time-invariant policy. In contrast, the nearest neighbors method switches between time-varying local controllers based on the proximity of initial object states sensed via motion capture. While both generalization methods leave room for improvement, our work shows that (i) local trajectory-based controllers for complex non-prehensile manipulation tasks can be constructed from surprisingly small amounts of training data, and (ii) collections of such controllers can be interpolated to form more global controllers. Results are summarized in the supplementary video: https://youtu.be/E0wmO6deqjo",
"title": ""
},
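The manipulation passage generalizes a library of local controllers by nearest-neighbour switching on the sensed initial object state. A bare-bones sketch of that selection step, with made-up states and placeholder controllers, might look like this.

```python
# Rough sketch of nearest-neighbour controller selection: store local
# controllers indexed by the initial object state they were trained from,
# and at run time pick the one whose training state is closest.
import numpy as np

class ControllerLibrary:
    def __init__(self):
        self.states = []        # initial object states (feature vectors)
        self.controllers = []   # anything callable: observation -> action

    def add(self, init_state, controller):
        self.states.append(np.asarray(init_state, dtype=float))
        self.controllers.append(controller)

    def select(self, sensed_state):
        dists = [np.linalg.norm(s - sensed_state) for s in self.states]
        return self.controllers[int(np.argmin(dists))]

library = ControllerLibrary()
library.add([0.00, 0.10], lambda obs: "controller_A action")
library.add([0.05, 0.30], lambda obs: "controller_B action")
policy = library.select(np.array([0.04, 0.28]))
print(policy(None))   # -> controller_B action
```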
{
"docid": "185d1c51d1ebd4428a9754a7c68d82d5",
"text": "Intersex disorders are rare congenital malformations with over 80% being diagnosed with congenital adrenal hyperplasia (CAH). It can be challenging to determine the correct gender at birth and a detailed understanding of the embryology and anatomy is crucial. The birth of a child with intersex is a true emergency situation and an immediate transfer to a medical center familiar with the diagnosis and management of intersex conditions should occur. In children with palpable gonads the presence of a Y chromosome is almost certain, since ovotestes or ovaries usually do not descend. Almost all those patients with male pseudohermaphroditism lack Mullerian structures due to MIS production from the Sertoli cells, but the insufficient testosterone stimulation leads to an inadequate male phenotype. The clinical manifestation of all CAH forms is characterized by the virilization of the outer genitalia. Surgical correction techniques have been developed and can provide satisfactory cosmetic and functional results. The discussion of the management of patients with intersex disorders continues. Current data challenge the past practice of sex reassignment. Further data are necessary to formulate guidelines and recommendations fitting for the individual situation of each patient. Until then the parents have to be supplied with the current data and outcome studies to make the correct choice for their child.",
"title": ""
},
{
"docid": "f3aa019816ae399c3fe834ffce3db53e",
"text": "This paper presents a method to incorporate 3D line segments in vision based SLAM. A landmark initialization method that relies on the Plucker coordinates to represent a 3D line is introduced: a Gaussian sum approximates the feature initial state and is updated as new observations are gathered by the camera. Once initialized, the landmarks state is estimated along an EKF-based SLAM approach: constraints associated with the Plucker representation are considered during the update step of the Kalman filter. The whole SLAM algorithm is validated in simulation runs and results obtained with real data are presented.",
"title": ""
},
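The passage above represents 3D lines by Plücker coordinates and enforces the associated constraint during the EKF update. A small sketch of the representation itself (not of the SLAM filter) is given below.

```python
# Plücker coordinates of the 3D line through two points, the bilinear
# constraint d·m = 0, and a point-to-line distance in that representation.
import numpy as np

def plucker_from_points(p1, p2):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1              # direction vector
    m = np.cross(p1, p2)     # moment vector (equals p1 x d)
    return d, m

def point_line_distance(point, d, m):
    q = np.asarray(point, float)
    return np.linalg.norm(np.cross(q, d) - m) / np.linalg.norm(d)

d, m = plucker_from_points([0, 0, 0], [1, 0, 0])
print(np.dot(d, m))                           # ~0: Plücker constraint holds
print(point_line_distance([0, 2, 0], d, m))   # 2.0
```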
{
"docid": "aaabe81401e33f7e2bb48dd6d5970f9b",
"text": "Brain tumor is the most life undermining sickness and its recognition is the most challenging task for radio logistics by manual detection due to varieties in size, shape and location and sort of tumor. So, detection ought to be quick and precise and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic sets based segmentation is performed to detect the tumor. MRI is an intense apparatus over CT to analyze the interior segments of the body and the tumor. Tumor is detected and true, false and indeterminacy values of tumor are determined by this technique and the proposed method produce the beholden results.",
"title": ""
},
{
"docid": "1e2e099c849b165b31b0c36040825464",
"text": "In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)’s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST’s initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility.",
"title": ""
}
] |
scidocsrr
|
f55d24bae178c75216a4836a5dfaf382
|
Depth Creates No Bad Local Minima
|
[
{
"docid": "4531b034f7644a6f5e925cda8cad875e",
"text": "This paper considers global optimization with a black-box unknown objective function that can be non-convex and non-differentiable. Such a difficult optimization problem arises in many real-world applications, such as parameter tuning in machine learning, engineering design problem, and planning with a complex physics simulator. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), to aim for both fast convergence in practice and finite-time error bound in theory. The advantage and usage of the new algorithm are illustrated via theoretical analysis and an experiment conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to specifically solve a planning problem via policy search with continuous state/action space and long time horizon while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The result of the application study demonstrates the practical utility of our method.",
"title": ""
},
{
"docid": "b1b5b1dd170bff4bd33d11d6b3959d11",
"text": "Neural Networks are formally hard to train. How can we circumvent hardness results? • Over specified networks: While over specification seems to speedup training , formally hardness results are valid in the improper model. • Changing the activation function: While changing the activation function from sigmoid to ReLu has lead to faster convergence of SGD methods, formally these networks are still hard.",
"title": ""
}
] |
[
{
"docid": "9126eda46fe299bc3067bace979cdf5e",
"text": "This paper considers the intersection of technology and play through the novel approach of gamification and its application to early years education. The intrinsic connection between play and technology is becoming increasingly significant in early years education. By creating an awareness of the early years adoption of technology into guiding frameworks, and then exploring the makeup of gaming elements, this paper draws connections for guiding principles in adopting more technology-focused play opportunities for Generation Alpha.",
"title": ""
},
{
"docid": "16dd74e72700ce82502f75054b5c3fe6",
"text": "Multiple access (MA) technology is of most importance for 5G. Non-orthogonal multiple access (NOMA) utilizing power domain and advanced receiver has been considered as a promising candidate MA technology recently. In this paper, the NOMA concept is presented toward future enhancements of spectrum efficiency in lower frequency bands for downlink of 5G system. Key component technologies of NOMA are presented and discussed including multiuser transmission power allocation, scheduling algorithm, receiver design and combination of NOMA with multi-antenna technology. The performance gains of NOMA are evaluated by system-level simulations with very practical assumptions. Under multiple configurations and setups, the achievable system-level gains of NOMA are shown promising even when practical considerations were taken into account.",
"title": ""
},
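The NOMA passage relies on power-domain superposition with successive interference cancellation at the receiver. A toy two-user rate calculation, with made-up channel gains and power split, illustrates the standard expressions; it is not the system-level simulator used in the paper.

```python
# Two-user downlink power-domain NOMA sketch with SIC at the near user.
# Channel gains, power split and noise power are illustrative numbers only.
import numpy as np

def noma_rates(p_total, alpha_far, g_near, g_far, noise):
    """alpha_far: fraction of total power allocated to the far (weak) user."""
    p_far, p_near = alpha_far * p_total, (1 - alpha_far) * p_total
    # Far user decodes its own signal, treating the near user's as noise.
    r_far = np.log2(1 + p_far * g_far / (p_near * g_far + noise))
    # Near user first removes the far user's signal via SIC, then decodes.
    r_near = np.log2(1 + p_near * g_near / noise)
    return r_near, r_far

r_near, r_far = noma_rates(p_total=1.0, alpha_far=0.8,
                           g_near=10.0, g_far=1.0, noise=0.1)
print(f"near user: {r_near:.2f} bit/s/Hz, far user: {r_far:.2f} bit/s/Hz")
```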
{
"docid": "d2ca6e3dbf35d1205eaeece0adb5646f",
"text": "The Self-Organizing Map (SOM) is one of the best known and most popular neural network-based data analysis tools. Many variants of the SOM have been proposed, like the Neural Gas by Martinetz and Schulten, the Growing Cell Structures by Fritzke, and the Tree-Structured SOM by Koikkalainen and Oja. The purpose of such variants is either to make a more flexible topology, suitable for complex data analysis problems or to reduce the computational requirements of the SOM, especially the time-consuming search for the best-matching unit in large maps. We propose here a new variant called the Evolving Tree which tries to combine both of these advantages. The nodes are arranged in a tree topology that is allowed to grow when any given branch receives a lot of hits from the training vectors. The search for the best matching unit and its neighbors is conducted along the tree and is therefore very efficient. A comparison experiment with high dimensional real world data shows that the performance of the proposed method is better than some classical variants of SOM.",
"title": ""
},
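The Evolving Tree passage gains its efficiency from searching the best-matching unit along a tree instead of over a flat map. A toy sketch of that descent, on a hand-made tree rather than a trained one, is shown below.

```python
# Tree-based best-matching-unit (BMU) search: descend from the root, at each
# node moving to the child whose codebook vector is closest to the input.
import numpy as np

class Node:
    def __init__(self, weight, children=None):
        self.weight = np.asarray(weight, dtype=float)
        self.children = children or []

def find_bmu(root, x):
    node = root
    while node.children:
        node = min(node.children,
                   key=lambda c: np.linalg.norm(c.weight - x))
    return node                      # a leaf: the best-matching unit

root = Node([0.5, 0.5], [
    Node([0.2, 0.2], [Node([0.1, 0.1]), Node([0.3, 0.25])]),
    Node([0.8, 0.8], [Node([0.7, 0.9]), Node([0.95, 0.7])]),
])
print(find_bmu(root, np.array([0.75, 0.85])).weight)   # -> [0.7 0.9]
```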
{
"docid": "ce402c150d74cbc954378ea7927dfa71",
"text": "The study investigated the influence of extrinsic and intrinsic motivation on employees performance. Subjects for the study consisted of one hundred workers of Flour Mills of Nigeria PLC, Lagos. Data for the study were gathered through the administration of a self-designed questionnaire. The data collected were subjected to appropriate statistical analysis using Pearson Product Moment Correlation Coefficient, and all the findings were tested at 0.05 level of significance. The result obtained from the analysis showed that there existed relationship between extrinsic motivation and the performance of employees, while no relationship existed between intrinsic motivation and employees performance. On the basis of these findings, implications of the findings for future study were stated.",
"title": ""
},
{
"docid": "d2ef29c6e397ac5bd0320ec6d0238f91",
"text": "It has been speculated that marine microplastics may cause negative effects on benthic marine organisms and increase bioaccumulation of persistent organic pollutants (POPs). Here, we provide the first controlled study of plastic effects on benthic organisms including transfer of POPs. The effects of polystyrene (PS) microplastic on survival, activity, and bodyweight, as well as the transfer of 19 polychlorinated biphenyls (PCBs), were assessed in bioassays with Arenicola marina (L.). PS was pre-equilibrated in natively contaminated sediment. A positive relation was observed between microplastic concentration in the sediment and both uptake of plastic particles and weight loss by A. marina. Furthermore, a reduction in feeding activity was observed at a PS dose of 7.4% dry weight. A low PS dose of 0.074% increased bioaccumulation of PCBs by a factor of 1.1-3.6, an effect that was significant for ΣPCBs and several individual congeners. At higher doses, bioaccumulation decreased compared to the low dose, which however, was only significant for PCB105. PS had statistically significant effects on the organisms' fitness and bioaccumulation, but the magnitude of the effects was not high. This may be different for sites with different plastic concentrations, or plastics with a higher affinity for POPs.",
"title": ""
},
{
"docid": "1b04911f677767284063133908ab4bb1",
"text": "An increasing number of companies are beginning to deploy services/applications in the cloud computing environment. Enhancing the reliability of cloud service has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized. Therefore, a reliability enhancement approach should not consume too much resource. However, existing approaches cannot achieve the optimal effect because of checkpoint image-sharing neglect, and checkpoint image inaccessibility caused by node crashing. To address this problem, we propose a cloud service reliability enhancement approach for minimizing network and storage resource usage in a cloud data center. In our proposed approach, the identical parts of all virtual machines that provide the same service are checkpointed once as the service checkpoint image, which can be shared by those virtual machines to reduce the storage resource consumption. Then, the remaining checkpoint images only save the modified page. To persistently store the checkpoint image, the checkpoint image storage problem is modeled as an optimization problem. Finally, we present an efficient heuristic algorithm to solve the problem. The algorithm exploits the data center network architecture characteristics and the node failure predicator to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the renowned cloud simulator Cloudsim and conduct experiments on it. Experimental results based on the extended Cloudsim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches.",
"title": ""
},
{
"docid": "ae5a1d9874b9fd1358d7768936c85491",
"text": "Photoplethysmography (PPG) is a technique that uses light to noninvasively obtain a volumetric measurement of an organ with each cardiac cycle. A PPG-based system emits monochromatic light through the skin and measures the fraction of the light power which is transmitted through a vascular tissue and detected by a photodetector. Part of thereby transmitted light power is modulated by the vascular tissue volume changes due to the blood circulation induced by the heart beating. This modulated light power plotted against time is called the PPG signal. Pulse Oximetry is an empirical technique which allows the arterial blood oxygen saturation (SpO2 – molar fraction) evaluation from the PPG signals. There have been many reports in the literature suggesting that other arterial blood chemical components molar fractions and concentrations can be evaluated from the PPG signals. Most attempts to perform such evaluation on empirical bases have failed, especially for components concentrations. This paper introduces a non-empirical physical model which can be used to analytically investigate the phenomena of PPG signal. Such investigation would result in simplified engineering models, which can be used to design validating experiments and new types of spectroscopic devices with the potential to assess venous and arterial blood chemical composition in both molar fractions and concentrations non-invasively.",
"title": ""
},
{
"docid": "35ac15f19cefd103f984519e046e407c",
"text": "This paper presents a highly sensitive sensor for crack detection in metallic surfaces. The sensor is inspired by complementary split-ring resonators which have dimensions much smaller than the excitation’s wavelength. The entire sensor is etched in the ground plane of a microstrip line and fabricated using printed circuit board technology. Compared to available microwave techniques, the sensor introduced here has key advantages including high sensitivity, increased dynamic range, spatial resolution, design simplicity, selectivity, and scalability. Experimental measurements showed that a surface crack having 200-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> width and 2-mm depth gives a shift in the resonance frequency of 1.5 GHz. This resonance frequency shift exceeds what can be achieved using other sensors operating in the low GHz frequency regime by a significant margin. In addition, using numerical simulation, we showed that the new sensor is able to resolve a 10-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula>-wide crack (equivalent to <inline-formula> <tex-math notation=\"LaTeX\">$\\lambda $ </tex-math></inline-formula>/4000) with 180-MHz shift in the resonance frequency.",
"title": ""
},
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "87c17ce9b4bd78f3be037fedf7e558e3",
"text": "Conversational Memory Network: To classify emotion of utterance ui, corresponding histories (hista and histb) are taken. Each history, histλ, contains the preceding K utterances by person Pλ. Histories are modeled into memories and utilized as follows, Memory Representation: Memory representation Mλ = [mλ, ...,mλ ] for histλ is generated using a GRU, λ ∈ {a, b}. Memory Input: Attention mechanism is used to read Mλ. Relevance of each memory mλ’s context with ui is computed using a match operation, pλ = softmax(q i .mλ) , qi = B.ui (1) Memory Output: Weighted combination of memories is calculated using attention scores. oλ = M ′ λ.pλ (2)",
"title": ""
},
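Equations (1) and (2) above describe an attention read over one speaker's memory followed by a weighted readout. A small numpy sketch with random stand-in matrices (dimensions are assumptions) makes the computation concrete.

```python
# Numpy sketch of the memory read in equations (1)-(2): attention match
# between the query utterance and one speaker's memories, then a weighted
# readout. All matrices here are random stand-ins.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K, d = 5, 16                      # K past utterances, embedding size d
u_i = rng.normal(size=d)          # current utterance representation
M = rng.normal(size=(K, d))       # input memories m^1..m^K for one speaker
M_out = rng.normal(size=(K, d))   # output memory representation M'
B = rng.normal(size=(d, d))       # query transformation

q_i = B @ u_i                     # q_i = B u_i
p = softmax(M @ q_i)              # eq. (1): attention over the K memories
o = M_out.T @ p                   # eq. (2): weighted combination of memories
print(p.round(3), o.shape)
```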
{
"docid": "2bf678c98d27501443f0f6fdf35151d7",
"text": "The goal of video summarization is to distill a raw video into a more compact form without losing much semantic information. However, previous methods mainly consider the diversity and representation interestingness of the obtained summary, and they seldom pay sufficient attention to semantic information of resulting frame set, especially the long temporal range semantics. To explicitly address this issue, we propose a novel technique which is able to extract the most semantically relevant video segments (i.e., valid for a long term temporal duration) and assemble them into an informative summary. To this end, we develop a semantic attended video summarization network (SASUM) which consists of a frame selector and video descriptor to select an appropriate number of video shots by minimizing the distance between the generated description sentence of the summarized video and the human annotated text of the original video. Extensive experiments show that our method achieves a superior performance gain over previous methods on two benchmark datasets.",
"title": ""
},
{
"docid": "4ead8caeea4143b8c5deb2ea91e0a141",
"text": "The statistical discrimination and clustering literature has studied the problem of identifying similarities in time series data. Some studies use non-parametric approaches for splitting a set of time series into clusters by looking at their Euclidean distances in the space of points. A new measure of distance between time series based on the normalized periodogram is proposed. Simulation results comparing this measure with others parametric and non-parametric metrics are provided. In particular, the classification of time series as stationary or as non-stationary is discussed. The use of both hierarchical and non-hierarchical clustering algorithms is considered. An illustrative example with economic time series data is also presented. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
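The clustering passage measures similarity between series via their normalized periodograms. A possible sketch of such a distance, assuming equal-length series and a plain Euclidean metric on the normalized ordinates, is given below.

```python
# Distance between two time series based on their normalized periodograms
# (periodogram ordinates divided by the sample variance).
import numpy as np

def normalized_periodogram(x):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    dft = np.fft.rfft(x)[1:]             # Fourier frequencies j/n, j >= 1
    per = (np.abs(dft) ** 2) / n          # periodogram ordinates
    return per / x.var()                  # normalization removes the scale

def periodogram_distance(x, y):
    px, py = normalized_periodogram(x), normalized_periodogram(y)
    return np.linalg.norm(px - py)

t = np.arange(200)
a = np.sin(0.3 * t) + 0.1 * np.random.default_rng(1).normal(size=200)
b = np.sin(0.3 * t) + 0.1 * np.random.default_rng(2).normal(size=200)
c = np.random.default_rng(3).normal(size=200)      # pure noise
print(periodogram_distance(a, b) < periodogram_distance(a, c))   # True
```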
{
"docid": "1c03f51212eca905657ba1361173c055",
"text": "A miscellany of new strategies, experimental techniques and theoretical approaches are emerging in the ongoing battle against cancer. Nevertheless, as new, ground-breaking discoveries relating to many and diverse areas of cancer research are made, scientists often have recourse to mathematical modelling in order to elucidate and interpret these experimental findings. Indeed, experimentalists and clinicians alike are becoming increasingly aware of the possibilities afforded by mathematical modelling, recognising that current medical techniques and experimental approaches are often unable to distinguish between various possible mechanisms underlying important aspects of tumour development. This short treatise presents a concise history of the study of solid tumour growth, illustrating the development of mathematical approaches from the early decades of the twentieth century to the present time. Most importantly these mathematical investigations are interwoven with the associated experimental work, showing the crucial relationship between experimental and theoretical approaches, which together have moulded our understanding of tumour growth and contributed to current anti-cancer treatments. Thus, a selection of mathematical publications, including the influential theoretical studies by Burton, Greenspan, Liotta et al., McElwain and co-workers, Adam and Maggelakis, and Byrne and co-workers are juxtaposed with the seminal experimental findings of Gray et al. on oxygenation and radio-sensitivity, Folkman on angiogenesis, Dorie et al. on cell migration and a wide variety of other crucial discoveries. In this way the development of this field of research through the interactions of these different approaches is illuminated, demonstrating the origins of our current understanding of the disease.",
"title": ""
},
{
"docid": "77f5c568ed065e4f23165575c0a05da6",
"text": "Localization is the problem of determining the position of a mobile robot from sensor data. Most existing localization approaches are passive, i.e., they do not exploit the opportunity to control the robot's effectors during localization. This paper proposes an active localization approach. The approach provides rational criteria for (1) setting the robot's motion direction (exploration), and (2) determining the pointing direction of the sensors so as to most efficiently localize the robot. Furthermore, it is able to deal with noisy sensors and approximative world models. The appropriateness of our approach is demonstrated empirically using a mobile robot in a structured office environment.",
"title": ""
},
{
"docid": "d6d0a5d1ddffaefe6d2f0944e50b3b70",
"text": "We present a generalization of the scalar importance function employed by Metropolis Light Transport (MLT) and related Markov chain rendering algorithms. Although MLT is known for its user-designable mutation rules, we demonstrate that its scalar contribution function is similarly programmable in an unbiased manner. Normally, MLT samples light paths with a tendency proportional to their brightness. For a range of scenes, we demonstrate that this importance function is undesirable and leads to poor sampling behaviour. Instead, we argue that simple user-designable importance functions can concentrate work in transport effects of interest and increase estimator efficiency. Unlike mutation rules, these functions are not encumbered with the calculation of transitional probabilities. We introduce alternative importance functions, which encourage the Markov chain to aggressively pursue sampling goals of interest to the user. In addition, we prove that these importance functions may adapt over the course of a render in an unbiased fashion. To that end, we introduce multi-stage MLT, a general rendering setting for creating such adaptive functions. This allows us to create a noise-sensitive MLT renderer whose importance function explicitly targets noise. Finally, we demonstrate that our techniques are compatible with existing Markov chain rendering algorithms and significantly improve their visual efficiency.",
"title": ""
},
{
"docid": "823c00a4cbbfb3ca5fc302dfeff0fbb3",
"text": "Given that the synthesis of cumulated knowledge is an essential condition for any field to grow and develop, we believe that the enhanced role of IS reviews requires that this expository form be given careful scrutiny. Over the past decade, several senior scholars have made calls for more review papers in our field. While the number of IS review papers has substantially increased in recent years, no prior research has attempted to develop a general framework to conduct and evaluate the rigor of standalone reviews. In this paper, we fill this gap. More precisely, we present a set of guidelines for guiding and evaluating IS literature reviews and specify to which review types they apply. To do so, we first distinguish between four broad categories of review papers and then propose a set of guidelines that are grouped according to the generic phases and steps of the review process. We hope our work will serve as a valuable source for those conducting, evaluating, and/or interpreting reviews in our field.",
"title": ""
},
{
"docid": "d36021ff647a2f2c74dd35a847847a09",
"text": "An ontology is a crucial factor for the success of the Semantic Web and other knowledge-based systems in terms of share and reuse of domain knowledge. However, there are a few concrete ontologies within actual knowledge domains including learning domains. In this paper, we develop an ontology which is an explicit formal specification of concepts and semantic relations among them in philosophy. We call it a philosophy ontology. Our philosophy is a formal specification of philosophical knowledge including knowledge of contents of classical texts of philosophy. We propose a methodology, which consists of detailed guidelines and templates, for constructing text-based ontology. Our methodology consists of 3 major steps and 14 minor steps. To implement the philosophy ontology, we develop an ontology management system based on Topic Maps. Our system includes a semi-automatic translator for creating Topic Map documents from the output of conceptualization steps and other tools to construct, store, retrieve ontologies based on Topic Maps. Our methodology and tools can be applied to other learning domain ontologies, such as history, literature, arts, and music. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9cb16594b916c5d11c189e80c0ac298a",
"text": "This paper describes the design of an innovative and low cost self-assistive technology that is used to facilitate the control of a wheelchair and home appliances by using advanced voice commands of the disabled people. This proposed system will provide an alternative to the physically challenged people with quadriplegics who is permanently unable to move their limbs (but who is able to speak and hear) and elderly people in controlling the motion of the wheelchair and home appliances using their voices to lead an independent, confident and enjoyable life. The performance of this microcontroller based and voice integrated design is evaluated in terms of accuracy and velocity in various environments. The results show that it could be part of an assistive technology for the disabled persons without any third person’s assistance.",
"title": ""
},
{
"docid": "1c269ac67fb954da107229fe4e18dcc8",
"text": "The number of output-voltage levels available in pulsewidth-modulated (PWM) voltage-source inverters can be increased by inserting a split-wound coupled inductor between the upper and lower switches in each inverter leg. Interleaved PWM control of both inverter-leg switches produces three-level PWM voltage waveforms at the center tap of the coupled inductor winding, representing the inverter-leg output terminal, with a PWM frequency twice the switching frequency. The winding leakage inductance is in series with the output terminal, with the main magnetizing inductance filtering the instantaneous PWM-cycle voltage differences between the upper and lower switches. Since PWM dead-time signal delays can be removed, higher device switching frequencies and higher fundamental output voltages are made possible. The proposed inverter topologies produce five-level PWM voltage waveforms between two inverter-leg terminals with a PWM frequency up to four times higher than the inverter switching frequency. This is achieved with half the number of switches used in alternative schemes. This paper uses simulated and experimental results to illustrate the operation of the proposed inverter structures.",
"title": ""
}
] |
scidocsrr
|
45e59938a83ec258ff5663a5b01a77f8
|
Control of Tendon-Driven Soft Foam Robot Hands
|
[
{
"docid": "f4abfe0bb969e2a6832fa6317742f202",
"text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.",
"title": ""
},
{
"docid": "fd9411cfa035139010be0935d9e52865",
"text": "This paper presents a robotic manipulation system capable of autonomously positioning a multi-segment soft fluidic elastomer robot in three dimensions. Specifically, we present an extremely soft robotic manipulator morphology that is composed entirely from low durometer elastomer, powered by pressurized air, and designed to be both modular and durable. To understand the deformation of a single arm segment, we develop and experimentally validate a static deformation model. Then, to kinematically model the multi-segment manipulator, we use a piece-wise constant curvature assumption consistent with more traditional continuum manipulators. In addition, we define a complete fabrication process for this new manipulator and use this process to make multiple functional prototypes. In order to power the robot’s spatial actuation, a high capacity fluidic drive cylinder array is implemented, providing continuously variable, closed-circuit gas delivery. Next, using real-time data from a vision system, we develop a processing and control algorithm that generates realizable kinematic curvature trajectories and controls the manipulator’s configuration along these trajectories. Lastly, we experimentally demonstrate new capabilities offered by this soft fluidic elastomer manipulation system such as entering and advancing through confined three-dimensional environments as well as conforming to goal shape-configurations within a sagittal plane under closed-loop control.",
"title": ""
},
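The manipulator passage adopts the piecewise-constant-curvature assumption for kinematic modelling. As a hedged illustration, the standard constant-curvature arc geometry for a single segment's tip position (not the paper's full model or its statics) can be written as follows.

```python
# Piecewise-constant-curvature (PCC) sketch: tip position of one bending
# segment given its curvature, arc length and bending-plane angle.
import numpy as np

def pcc_tip_position(kappa, length, phi):
    """kappa: curvature (1/m), length: arc length (m), phi: bending-plane angle (rad)."""
    if abs(kappa) < 1e-9:                          # straight segment
        return np.array([0.0, 0.0, length])
    r = 1.0 / kappa                                # bending radius
    x_plane = r * (1.0 - np.cos(kappa * length))   # in-plane offset
    z = r * np.sin(kappa * length)
    return np.array([x_plane * np.cos(phi), x_plane * np.sin(phi), z])

# Quarter-circle bend of a 0.3 m segment in the x-z plane:
print(pcc_tip_position(kappa=np.pi / (2 * 0.3), length=0.3, phi=0.0))
```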
{
"docid": "19695936a91f2632911c9f1bee48c11d",
"text": "The purpose of this technical report is two-fold. First of all, it introduces a suite of challenging continuous control tasks (integrated with OpenAI Gym) based on currently existing robotics hardware. The tasks include pushing, sliding and pick & place with a Fetch robotic arm as well as in-hand object manipulation with a Shadow Dexterous Hand. All tasks have sparse binary rewards and follow a Multi-Goal Reinforcement Learning (RL) framework in which an agent is told what to do using an additional input. The second part of the paper presents a set of concrete research ideas for improving RL algorithms, most of which are related to Multi-Goal RL and Hindsight Experience Replay. 1 Environments All environments are released as part of OpenAI Gym1 (Brockman et al., 2016) and use the MuJoCo (Todorov et al., 2012) physics engine for fast and accurate simulation. A video presenting the new environments can be found at https://www.youtube.com/watch?v=8Np3eC_PTFo. 1.1 Fetch environments The Fetch environments are based on the 7-DoF Fetch robotics arm,2 which has a two-fingered parallel gripper. They are very similar to the tasks used in Andrychowicz et al. (2017) but we have added an additional reaching task and the pick & place task is a bit different.3 In all Fetch tasks, the goal is 3-dimensional and describes the desired position of the object (or the end-effector for reaching). Rewards are sparse and binary: The agent obtains a reward of −1 if the object is not at the target location (within a tolerance of 5 cm) and 0 otherwise. Actions are 4-dimensional: 3 dimensions specify the desired gripper movement in Cartesian coordinates and the last dimension controls opening and closing of the gripper. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the Cartesian position of the gripper, its linear velocity as well as the position and linear velocity of the robot’s gripper. If an object is present, we also include the object’s Cartesian position and rotation using Euler angles, its linear and angular velocities, as well as its position and linear velocities relative to gripper. https://github.com/openai/gym http://fetchrobotics.com/ In Andrychowicz et al. (2017) training on this task relied on starting some of the training episodes from a state in which the box is already grasped. This is not necessary for successful training if the target position of the box is sometimes in the air and sometimes on the table and we do not use this technique anymore. ar X iv :1 80 2. 09 46 4v 1 [ cs .L G ] 2 6 Fe b 20 18 Figure 1: The four proposed Fetch environments: FetchReach, FetchPush, FetchSlide, and FetchPickAndPlace. Reaching (FetchReach) The task is to move the gripper to a target position. This task is very easy to learn and is therefore a suitable benchmark to ensure that a new idea works at all.4 Pushing (FetchPush) A box is placed on a table in front of the robot and the task is to move it to a target location on the table. The robot fingers are locked to prevent grasping. The learned behavior is usually a mixture of pushing and rolling. Sliding (FetchSlide) A puck is placed on a long slippery table and the target position is outside of the robot’s reach so that it has to hit the puck with such a force that it slides and then stops at the target location due to friction. 
Pick & Place (FetchPickAndPlace) The task is to grasp a box and move it to the target location which may be located on the table surface or in the air above it. 1.2 Hand environments These environments are based on the Shadow Dexterous Hand, which is an anthropomorphic robotic hand with 24 degrees of freedom. Of those 24 joints, 20 can be controlled independently whereas the remaining ones are coupled joints. In all hand tasks, rewards are sparse and binary: The agent obtains a reward of −1 if the goal has not been achieved (within some task-specific tolerance) and 0 otherwise. Actions are 20-dimensional: We use absolute position control for all non-coupled joints of the hand. We apply the same action in 20 subsequent simulator steps (with ∆t = 0.002 each) before returning control to the agent, i.e. the agent’s action frequency is f = 25 Hz. Observations include the 24 positions and velocities of the robot’s joints. In case of an object that is being manipulated, we also include its Cartesian position and rotation represented by a quaternion (hence 7-dimensional) as well as its linear and angular velocities. In the reaching task, we include the Cartesian position of all 5 fingertips. Reaching (HandReach) A simple task in which the goal is 15-dimensional and contains the target Cartesian position of each fingertip of the hand. Similarly to the FetchReach task, this task is relatively easy to learn. A goal is considered achieved if the mean distance between fingertips and their desired position is less than 1 cm. Block manipulation (HandManipulateBlock) In the block manipulation task, a block is placed on the palm of the hand. The task is to then manipulate the block such that a target pose is achieved. The goal is 7-dimensional and includes the target position (in Cartesian coordinates) and target rotation (in quaternions). We include multiple variants with increasing levels of difficulty: • HandManipulateBlockRotateZ Random target rotation around the z axis of the block. No target position. • HandManipulateBlockRotateParallel Random target rotation around the z axis of the block and axis-aligned target rotations for the x and y axes. No target position. • HandManipulateBlockRotateXYZ Random target rotation for all axes of the block. No target position. (Footnotes: That being said, we have found that it is so easy that even partially broken implementations sometimes learn successful policies, so no conclusions should be drawn from this task alone. https://www.shadowrobot.com/products/dexterous-hand/)",
"title": ""
},
{
"docid": "9970c9a191d9223448d205f0acec6976",
"text": "This paper presents the complete development and analysis of a soft robotic platform that exhibits peristaltic locomotion. The design principle is based on the antagonistic arrangement of circular and longitudinal muscle groups of Oligochaetes. Sequential antagonistic motion is achieved in a flexible braided mesh-tube structure using a nickel titanium (NiTi) coil actuators wrapped in a spiral pattern around the circumference. An enhanced theoretical model of the NiTi coil spring describes the combination of martensite deformation and spring elasticity as a function of geometry. A numerical model of the mesh structures reveals how peristaltic actuation induces robust locomotion and details the deformation by the contraction of circumferential NiTi actuators. Several peristaltic locomotion modes are modeled, tested, and compared on the basis of speed. Utilizing additional NiTi coils placed longitudinally, steering capabilities are incorporated. Proprioceptive potentiometers sense segment contraction, which enables the development of closed-loop controllers. Several appropriate control algorithms are designed and experimentally compared based on locomotion speed and energy consumption. The entire mechanical structure is made of flexible mesh materials and can withstand significant external impact during operation. This approach allows a completely soft robotic platform by employing a flexible control unit and energy sources.",
"title": ""
}
] |
[
{
"docid": "5fe851a0bd4a152e162f9c991fb74f6f",
"text": "Input-output examples have emerged as a practical and user-friendly specification mechanism for program synthesis in many environments. While example-driven tools have demonstrated tangible impact that has inspired adoption in industry, their underlying semantics are less well-understood: what are \"examples\" and how do they relate to other kinds of specifications? This paper demonstrates that examples can, in general, be interpreted as refinement types. Seen in this light, program synthesis is the task of finding an inhabitant of such a type. This insight provides an immediate semantic interpretation for examples. Moreover, it enables us to exploit decades of research in type theory as well as its correspondence with intuitionistic logic rather than designing ad hoc theoretical frameworks for synthesis from scratch. We put this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that we prove to be sound with respect to a conventional type system. In addition, we show how to handle negative examples, which arise from user feedback or counterexample-guided loops. This theory serves as the basis for a prototype implementation that extends our core language to support ML-style algebraic data types and structurally inductive functions. Users can also specify synthesis goals using polymorphic refinements and import monomorphic libraries. The prototype serves as a vehicle for empirically evaluating a number of different strategies for resolving the nondeterminism of the sequent calculus---bottom-up theorem-proving, term enumeration with refinement type checking, and combinations of both---the results of which classify, explain, and validate the design choices of existing synthesis systems. It also provides a platform for measuring the practical value of a specification language that combines \"examples\" with the more general expressiveness of refinements.",
"title": ""
},
{
"docid": "7d35f3afeb9a8e1dc6f99e4d241273c7",
"text": "In this paper, we propose Motion Dense Sampling (MDS) for action recognition, which detects very informative interest points from video frames. MDS has three advantages compared to other existing methods. The first advantage is that MDS detects only interest points which belong to action regions of all regions of a video frame. The second one is that it can detect the constant number of points even when the size of action region in an image drastically changes. The Third one is that MDS enables to describe scale invariant features by computing sampling scale for each frame based on the size of action regions. Thus, our method detects much more informative interest points from videos unlike other methods. We also propose Category Clustering and Component Clustering, which generate the very effective codebook for action recognition. Experimental results show a significant improvement over existing methods on YouTube dataset. Our method achieves 87.5 % accuracy for video classification by using only one descriptor.",
"title": ""
},
{
"docid": "89cba76ab33c66a3687481ea56e1e556",
"text": "With sustained growth of software complexity, finding security vulnerabilities in operating systems has become an important necessity. Nowadays, OS are shipped with thousands of binary executables. Unfortunately, methodologies and tools for an OS scale program testing within a limited time budget are still missing.\n In this paper we present an approach that uses lightweight static and dynamic features to predict if a test case is likely to contain a software vulnerability using machine learning techniques. To show the effectiveness of our approach, we set up a large experiment to detect easily exploitable memory corruptions using 1039 Debian programs obtained from its bug tracker, collected 138,308 unique execution traces and statically explored 76,083 different subsequences of function calls. We managed to predict with reasonable accuracy which programs contained dangerous memory corruptions.\n We also developed and implemented VDiscover, a tool that uses state-of-the-art Machine Learning techniques to predict vulnerabilities in test cases. Such tool will be released as open-source to encourage the research of vulnerability discovery at a large scale, together with VDiscovery, a public dataset that collects raw analyzed data.",
"title": ""
},
{
"docid": "bb0b9b679444291bceecd68153f6f480",
"text": "Path planning is one of the most significant and challenging subjects in robot control field. In this paper, a path planning method based on an improved shuffled frog leaping algorithm is proposed. In the proposed approach, a novel updating mechanism based on the median strategy is used to avoid local optimal solution problem in the general shuffled frog leaping algorithm. Furthermore, the fitness function is modified to make the path generated by the shuffled frog leaping algorithm smoother. In each iteration, the globally best frog is obtained and its position is used to lead the movement of the robot. Finally, some simulation experiments are carried out. The experimental results show the feasibility and effectiveness of the proposed algorithm in path planning for mobile robots.",
"title": ""
},
{
"docid": "c2e7425f719dd51eec0d8e180577269e",
"text": "Most important way of communication among humans is language and primary medium used for the said is speech. The speech recognizers make use of a parametric form of a signal to obtain the most important distinguishable features of speech signal for recognition purpose. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark frequency Cepstral coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. Artifical Neural Network is used as back end processor. The experimental results show that the better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all the three types of words.",
"title": ""
},
{
"docid": "c92807c973f51ac56fe6db6c2bb3f405",
"text": "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the nonprivacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.",
"title": ""
},
{
"docid": "6f6042046ef1c1642bb95bc47f38cdbb",
"text": "Jean-Jacques Rousseau's concepts of self-love (amour propre) and love of self (amour de soi même) are applied to the psychology of terrorism. Self-love is concern with one's image in the eyes of respected others, members of one's group. It denotes one's feeling of personal significance, the sense that one's life has meaning in accordance with the values of one's society. Love of self, in contrast, is individualistic concern with self-preservation, comfort, safety, and the survival of self and loved ones. We suggest that self-love defines a motivational force that when awakened arouses the goal of a significance quest. When a group perceives itself in conflict with dangerous detractors, its ideology may prescribe violence and terrorism against the enemy as a means of significance gain that gratifies self-love concerns. This may involve sacrificing one's self-preservation goals, encapsulated in Rousseau's concept of love of self. The foregoing notions afford the integration of diverse quantitative and qualitative findings on individuals' road to terrorism and back. Understanding the significance quest and the conditions of its constructive fulfillment may be crucial to reversing the current tide of global terrorism.",
"title": ""
},
{
"docid": "e3e9532e873739e8024ba7d55de335c3",
"text": "We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of the log marginal likelihood of the training data, which can be optimised to adjust a large number of hyperparameters automatically. We demonstrate the model selection capabilities of the algorithm in a range of experiments. In line with the development of our method, we present a simple view on sparse approximations for GP models and their underlying assumptions and show relations to other methods.",
"title": ""
},
{
"docid": "bc6f9ef52c124675c62ccb8a1269a9b8",
"text": "We explore 3D printing physical controls whose tactile response can be manipulated programmatically through pneumatic actuation. In particular, by manipulating the internal air pressure of various pneumatic elements, we can create mechanisms that require different levels of actuation force and can also change their shape. We introduce and discuss a series of example 3D printed pneumatic controls, which demonstrate the feasibility of our approach. This includes conventional controls, such as buttons, knobs and sliders, but also extends to domains such as toys and deformable interfaces. We describe the challenges that we faced and the methods that we used to overcome some of the limitations of current 3D printing technology. We conclude with example applications and thoughts on future avenues of research.",
"title": ""
},
{
"docid": "e468fd0e6c14fee379cd1825afd018eb",
"text": "Bionic implants for the deaf require wide-dynamicrange low-power microphone preamplifiers with good wide-band rejection of the supply noise. Widely used low-cost implementations of such preamplifiers typically use the buffered voltage output of an electret capacitor with a built-in JFET source follower. We describe a design in which the JFET microphone buffer’s output current, rather than its output voltage, is transduced via a sense-amplifier topology allowing good in-band power-supply rejection. The design employs a low-frequency feedback loop to subtract the dc bias current of the microphone and prevent it from causing saturation. Wide-band power-supply rejection is achieved by integrating a novel filter on all current-source biasing. Our design exhibits 80 dB of dynamic range with less than 5 Vrms of input noise while operating from a 2.8 V supply. The power consumption is 96 W which includes 60 W for the microphone built-in buffer. The in-band power-supply rejection ratio varies from 50 to 90 dB while out-of-band supply attenuation is greater than 60 dB until 25 MHz. Fabrication was done in a 1.5m CMOS process with gain programmability for both microphone and auxiliary channel inputs.",
"title": ""
},
{
"docid": "b796a957545aa046bad14d44c4578700",
"text": "Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.",
"title": ""
},
{
"docid": "bbedbe2d901f63e3f163ea0f24a2e2d7",
"text": "a r t i c l e i n f o a b s t r a c t The leader trait perspective is perhaps the most venerable intellectual tradition in leadership research. Despite its early prominence in leadership research, it quickly fell out of favor among leadership scholars. Thus, despite recent empirical support for the perspective, conceptual work in the area lags behind other theoretical perspectives. Accordingly, the present review attempts to place the leader trait perspective in the context of supporting intellectual traditions, including evolutionary psychology and behavioral genetics. We present a conceptual model that considers the source of leader traits, mediators and moderators of their effects on leader emergence and leadership effectiveness, and distinguish between perceived and actual leadership effectiveness. We consider both the positive and negative effects of specific \" bright side \" personality traits: the Big Five traits, core self-evaluations, intelligence, and charisma. We also consider the positive and negative effects of \" dark side \" leader traits: Narcissism, hubris, dominance, and Machiavellianism. If one sought to find singular conditions that existed across species, one might find few universals. One universal that does exist, at least those species that have brains and nervous systems, is leadership. From insects to reptiles to mammals, leadership exists as surely as collective activity exists. There is the queen bee, and there is the alpha male. Though the centrality of leadership may vary by species (it seems more important to mammals than, say, to avians and reptiles), it is fair to surmise that whenever there is social activity, a social structure develops, and one (perhaps the) defining characteristic of that structure is the emergence of a leader or leaders. The universality of leadership, however, does not deny the importance of individual differences — indeed the emergence of leadership itself is proof of individual differences. Moreover, even casual observation of animal (including human) collective behavior shows the existence of a leader. Among a herd of 100 cattle or a pride of 20 lions, one is able to detect a leadership structure (especially at times of eating, mating, and attack). One quickly wonders: What has caused this leadership structure to emerge? Why has one animal (the alpha) emerged to lead the collective? And how does this leadership cause this collective to flourish — or founder? Given these questions, it is of no surprise that the earliest conceptions of leadership focused on individual …",
"title": ""
},
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
},
{
"docid": "85c5746b7ead047f34cbf11c42f0890e",
"text": "Depression is a serious mental health problem affecting a significant segment of American society today, and in particular college students. In a survey by the U.S. Centers for Disease Control (CDC) in 2009, 26.1% of U.S. students nationwide reported feeling so sad or hopeless almost every day for 2 or more weeks in a row that they stopped doing some usual activities. Similar statistics are also reported in mental health studies by the American College Health Association, and by independent surveys. In this article, the author report their findings from a month-long experiment conducted at Missouri University of Science and Technology on studying depressive symptoms among college students who use the Internet. This research was carried out using real campus Internet data collected continuously, unobtrusively, and while preserving privacy.",
"title": ""
},
{
"docid": "abc160fc578bb40935afa7aea93cf6ca",
"text": "This study investigates the effect of leader and follower behavior on employee voice, team task responsibility and team effectiveness. This study distinguishes itself by including both leader and follower behavior as predictors of team effectiveness. In addition, employee voice and team task responsibility are tested as potential mediators of the relationship between task-oriented behaviors (informing, directing, verifying) and team effectiveness as well as the relationship between relation-oriented behaviors (positive feedback, intellectual stimulation, individual consideration) and team effectiveness. This cross-sectional exploratory study includes four methods: 1) inter-reliable coding of leader and follower behavior during staff meetings; 2) surveys of 57 leaders; 3) surveys of643 followers; 4) survey of 56 lean coaches. Regression analyses showed that both leaders and followers display more task-oriented behaviors opposed to relation-oriented behaviors during staff meetings. Contrary to the hypotheses, none of the observed leader behaviors positively influences employee voice, team task responsibility or team effectiveness. However, all three task-oriented follower behaviors indirectly influence team effectiveness. The findings from this research illustrate that follower behaviors has more influence on team effectiveness compared to leader behavior. Practical implications, strengths and limitations of the research are discussed. Moreover, future research directions including the mediating role of culture and psychological safety are proposed as well.",
"title": ""
},
{
"docid": "7267e5082c890dfa56a745d3b28425cc",
"text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.",
"title": ""
},
{
"docid": "6f049f55c1b6f65284c390bd9a2d7511",
"text": "Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve these results, they use millions of parameters to be trained. However, when targetting embedded applications the size of these models becomes problematic. As a consequence, their usage on smartphones or other resource limited devices is prohibited. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists in adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.",
"title": ""
},
{
"docid": "844dcf80b2feba89fced99a0f8cbe9bf",
"text": "Communication could potentially be an effective way for multi-agent cooperation. However, information sharing among all agents or in predefined communication architectures that existing methods adopt can be problematic. When there is a large number of agents, agents cannot differentiate valuable information that helps cooperative decision making from globally shared information. Therefore, communication barely helps, and could even impair the learning of multi-agent cooperation. Predefined communication architectures, on the other hand, restrict communication among agents and thus restrain potential cooperation. To tackle these difficulties, in this paper, we propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making. Our model leads to efficient and effective communication for large-scale multi-agent cooperation. Empirically, we show the strength of our model in a variety of cooperative scenarios, where agents are able to develop more coordinated and sophisticated strategies than existing methods.",
"title": ""
},
{
"docid": "d514fdfa92b4aba95922f9b200d71b5a",
"text": "In space-borne applications, reduction of size, weight, and power can be critical. Pursuant to this goal, we present an ultrawideband, tightly coupled dipole array (TCDA) capable of supporting numerous satellite communication bands, simultaneously. Such antennas enable weight reduction by replacing multiple antennas. In addition, it provides spectral efficiency by reusing intermediate frequencies for inter-satellite communication. For ease of fabrication, the array is initially designed for operation across the UHF, L, S, and lower-C bands (0.6-3.6 GHz), with emphasis on dual-linear polarization, and wide-angle scanning. The array achieves a minimum 6:1 bandwidth for VSWR less than 1.8, 2.4, and 3.1 for 0°, 45°, and 60° scans, respectively. The presented design represents the first practical realization of dual polarizations using a TCDA topology. This is accomplished through a dual-offset, split unit cell with minimized inter-feed coupling. Array simulations are verified with measured results of an 8 × 8 prototype, exhibiting very low cross polarization and near-theoretical gain across the band. Further, we present a TCDA design operating across the upper-S, C, X, and Ku bands (3-18 GHz). The array achieves this 6:1 bandwidth for VSWR <; 2 at broadside, and VSWR <; 2.6 at 45°. A discussion on design and fabrication for low-cost arrays operating at these frequencies is included.",
"title": ""
},
{
"docid": "a6e71e4be58c51b580fcf08e9d1a100a",
"text": "This dissertation is concerned with the processing of high velocity text using event processing means. It comprises a scientific approach for combining the area of information filtering and event processing, in order to analyse fast and voluminous streams of text. In order to be able to process text streams within event driven means, an event reference model was developed that allows for the conversion of unstructured or semi-structured text streams into discrete event types on which event processing engines can operate. Additionally, a set of essential reference processes in the domain of information filtering and text stream analysis were described using eventdriven concepts. In a second step, a reference architecture was designed that described essential architectural components required for the design of information filtering and text stream analysis systems in an event-driven manner. Further to this, a set of architectural patterns for building event driven text analysis systems was derived that support the design and implementation of such systems. Subsequently, a prototype was built using the theoretic foundations. This system was initially used to study the effect of sliding window sizes on the properties of dynamic sub-corpora. It could be shown that small sliding window based corpora are similar to larger sliding windows and thus can be used as a resource-saving alternative. Next, a study of several linguistic aspects of text streams was undertaken that showed that event stream summary statistics can provide interesting insights into the characteristics of high velocity text streams. Finally, four essential information filtering and text stream analysis components were studied, viz. filter policies, term weighting, thresholds and query expansion. These were studied using three temporal search profile types and were evaluated using standard performance measures. The goal was to study the efficiency of traditional as well as new algorithms within the given context of high velocity text stream data, in order to provide advise which methods work best. The results of this dissertation are intended to provide software architects and developers with valuable information for the design and implementation of event-driven text stream analysis systems.",
"title": ""
}
] |
scidocsrr
|
cdc88017485cf6108e8f95430ac27316
|
High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks
|
[
{
"docid": "4eb1636ff952677938114bcf2d81a636",
"text": "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.",
"title": ""
},
{
"docid": "adfe05c7e0cebf76c3f6cf7f84c7523e",
"text": "Mass detection from mammograms plays a crucial role as a pre- processing stage for mass segmentation and classification. The detection of masses from mammograms is considered to be a challenging problem due to their large variation in shape, size, boundary and texture and also because of their low signal to noise ratio compared to the surrounding breast tissue. In this paper, we present a novel approach for detecting masses in mammograms using a cascade of deep learning and random forest classifiers. The first stage classifier consists of a multi-scale deep belief network that selects suspicious regions to be further processed by a two-level cascade of deep convolutional neural networks. The regions that survive this deep learning analysis are then processed by a two-level cascade of random forest classifiers that use morphological and texture features extracted from regions selected along the cascade. Finally, regions that survive the cascade of random forest classifiers are combined using connected component analysis to produce state-of-the-art results. We also show that the proposed cascade of deep learning and random forest classifiers are effective in the reduction of false positive regions, while maintaining a high true positive detection rate. We tested our mass detection system on two publicly available datasets: DDSM-BCRP and INbreast. The final mass detection produced by our approach achieves the best results on these publicly available datasets with a true positive rate of 0.96 ± 0.03 at 1.2 false positive per image on INbreast and true positive rate of 0.75 at 4.8 false positive per image on DDSM-BCRP.",
"title": ""
}
] |
[
{
"docid": "f944f5e334a127cd50ab3ec0d3c2b603",
"text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity",
"title": ""
},
{
"docid": "6fee1cce864d858af6e28959961f5c24",
"text": "Much of the organic light emitting diode (OLED) characterization published to date addresses the high current regime encountered in the operation of passively addressed displays. Higher efficiency and brightness can be obtained by driving with an active matrix, but the lower instantaneous pixel currents place the OLEDs in a completely different operating mode. Results at these low current levels are presented and their impact on active matrix display design is discussed.",
"title": ""
},
{
"docid": "ad967df5257c81c12f4869ed6a30cbe5",
"text": "Tensor decompositions and tensor networks are emerging and promising tools for data analysis and data mining. In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions using novel mathematical and graphical representations. We discus the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and super compression of data achieved via quantized tensor train (QTT) networks. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization and performing all operations using relatively small size matrices and tensors and applying iteratively optimized and approximative tensor contractions.",
"title": ""
},
{
"docid": "b1313b777c940445eb540b1e12fa559e",
"text": "In this paper we explore the correlation between the sound of words and their meaning, by testing if the polarity (‘good guy’ or ‘bad guy’) of a character’s role in a work of fiction can be predicted by the name of the character in the absence of any other context. Our approach is based on phonological and other features proposed in prior theoretical studies of fictional names. These features are used to construct a predictive model over a manually annotated corpus of characters from motion pictures. By experimenting with different mixtures of features, we identify phonological features as being the most discriminative by comparison to social and other types of features, and we delve into a discussion of specific phonological and phonotactic indicators of a character’s role’s polarity.",
"title": ""
},
{
"docid": "c4b5c4c94faa6e77486a95457cdf502f",
"text": "In this paper, we implement an optical fiber communication system as an end-to-end deep neural network, including the complete chain of transmitter, channel model, and receiver. This approach enables the optimization of the transceiver in a single end-to-end process. We illustrate the benefits of this method by applying it to intensity modulation/direct detection (IM/DD) systems and show that we can achieve bit error rates below the 6.7% hard-decision forward error correction (HD-FEC) threshold. We model all componentry of the transmitter and receiver, as well as the fiber channel, and apply deep learning to find transmitter and receiver configurations minimizing the symbol error rate. We propose and verify in simulations a training method that yields robust and flexible transceivers that allow—without reconfiguration—reliable transmission over a large range of link dispersions. The results from end-to-end deep learning are successfully verified for the first time in an experiment. In particular, we achieve information rates of 42 Gb/s below the HD-FEC threshold at distances beyond 40 km. We find that our results outperform conventional IM/DD solutions based on two- and four-level pulse amplitude modulation with feedforward equalization at the receiver. Our study is the first step toward end-to-end deep learning based optimization of optical fiber communication systems.",
"title": ""
},
{
"docid": "1571e5a85d837a0878362473aadce808",
"text": "Image or video exchange over the Internet of Things (IoT) is a requirement in diverse applications, including smart health care, smart structures, and smart transportations. This paper presents a modular and extensible quadrotor architecture and its specific prototyping for automatic tracking applications. The architecture is extensible and based on off-the-shelf components for easy system prototyping. A target tracking and acquisition application is presented in detail to demonstrate the power and flexibility of the proposed design. Complete design details of the platform are also presented. The designed module implements the basic proportional-integral-derivative control and a custom target acquisition algorithm. Details of the sliding-window-based algorithm are also presented. This algorithm performs $20\\times $ faster than comparable approaches in OpenCV with equal accuracy. Additional modules can be integrated for more complex applications, such as search-and-rescue, automatic object tracking, and traffic congestion analysis. A hardware architecture for the newly introduced Better Portable Graphics (BPG) compression algorithm is also introduced in the framework of the extensible quadrotor architecture. Since its introduction in 1987, the Joint Photographic Experts Group (JPEG) graphics format has been the de facto choice for image compression. However, the new compression technique BPG outperforms the JPEG in terms of compression quality and size of the compressed file. The objective is to present a hardware architecture for enhanced real-time compression of the image. Finally, a prototyping platform of a hardware architecture for a secure digital camera (SDC) integrated with the secure BPG (SBPG) compression algorithm is presented. The proposed architecture is suitable for high-performance imaging in the IoT and is prototyped in Simulink. To the best of our knowledge, this is the first ever proposed hardware architecture for SBPG compression integrated with an SDC.",
"title": ""
},
{
"docid": "93a6f6aad7d60f7f843fa5d979ee917a",
"text": "A novel connected printed inverted-F antenna (PIFA) array with multiple-input multiple-output (MIMO) configuration is presented. The array is designed to operate at the 28 GHz band for 5G mobile applications. It consists of 4-element MIMO antenna system where each element includes 8 PIFA antennas forming a connected antenna array. A 1×8 power divider/combiner in the form of tapered T-junction is used to excite that array. The array operates at 28 GHz with 10 dB bandwidth of 1 GHz from 27.5 to 28.5 GHz. The proposed design is modeled on RO-4350B with the height of 0.76 mm and dielectric constant of 3.48. The dimensions of the substrate are 130×68×0.76 mm3, which matches the dimensions of modern smart phones. The proposed design is simple in structure, low profile and has a compact size that is suitable for mobile handsets.",
"title": ""
},
{
"docid": "52a4af83304ad0a5fe3a77dfdfdabb6a",
"text": "Discovering semantic coherent topics from the large amount of user-generated content (UGC) in social media would facilitate many downstream applications of intelligent computing. Topic models, as one of the most powerful algorithms, have been widely used to discover the latent semantic patterns in text collections. However, one key weakness of topic models is that they need documents with certain length to provide reliable statistics for generating coherent topics. In Twitter, the users’ tweets are mostly short and noisy. Observations of word co-occurrences are incomprehensible for topic models. To deal with this problem, previous work tried to incorporate prior knowledge to obtain better results. However, this strategy is not practical for the fast evolving UGC in Twitter. In this paper, we first cluster the users according to the retweet network, and the users’ interests are mined as the prior knowledge. Such data are then applied to improve the performance of topic learning. The potential cause for the effectiveness of this approach is that users in the same community usually share similar interests, which will result in less noisy sub-data sets. Our algorithm pre-learns two types of interest knowledge from the data set: the interest-word-sets and a tweet-interest preference matrix. Furthermore, a dedicated background model is introduced to judge whether a word is drawn from the background noise. Experiments on two real life twitter data sets show that our model achieves significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "7b7a0b0b6a36789834c321d04c2e2f8f",
"text": "In the present paper we propose and evaluate a framework for detection and classification of plant leaf/stem diseases using image processing and neural network technique. The images of plant leaves affected by four types of diseases namely early blight, late blight, powdery-mildew and septoria has been considered for study and evaluation of feasibility of the proposed method. The color transformation structures were obtained by converting images from RGB to HSI color space. The Kmeans clustering algorithm was used to divide images into clusters for demarcation of infected area of the leaves. After clustering, the set of color and texture features viz. moment, mean, variance, contrast, correlation and entropy were extracted based on Color Co-occurrence Method (CCM). A feed forward back propagation neural network was configured and trained using extracted set of features and subsequently utilized for detection of leaf diseases. Keyword: Color Co-Occurrence Method, K-Means, Feed Forward Neural Network",
"title": ""
},
{
"docid": "0cc25de8ea70fe1fd85824e8f3155bf7",
"text": "When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects’ shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to tailor mapping rules, through limited user input, to a specific application domain. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.",
"title": ""
},
{
"docid": "09ae8e02304fb3a179d343ab7f20c6cb",
"text": "Object-based point cloud analysis (OBPA) is useful for information extraction from airborne LiDAR point clouds. An object-based classification method is proposed for classifying the airborne LiDAR point clouds in urban areas herein. In the process of classification, the surface growing algorithm is employed to make clustering of the point clouds without outliers, thirteen features of the geometry, radiometry, topology and echo characteristics are calculated, a support vector machine (SVM) is utilized to classify the segments, and connected component analysis for 3D point clouds is proposed to optimize the original classification results. Three datasets with different point densities and complexities are employed to test our method. Experiments suggest that the proposed method is capable of making a classification of the urban point clouds with the overall classification accuracy larger than 92.34% and the Kappa coefficient larger than 0.8638, and the classification accuracy is promoted with the increasing of the point density, which is meaningful for various types of applications. Keyword: airborne LiDAR; object-based classification; point clouds; segmentation; SVM OPEN ACCESS Remote Sens. 2013, 5 3750",
"title": ""
},
{
"docid": "f6463026a75a981c22e00a98990a095a",
"text": "Thanks to their anonymity (pseudonymity) and elimination of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities. Unfortunately, some of these are criminal, e.g., money laundering, illicit marketplaces, and ransomware. Next-generation cryptocurrencies such as Ethereum will include rich scripting languages in support of smart contracts, programs that autonomously intermediate transactions. In this paper, we explore the risk of smart contracts fueling new criminal ecosystems. Specifically, we show how what we call criminal smart contracts (CSCs) can facilitate leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism).\n We show that CSCs for leakage of secrets (a la Wikileaks) are efficiently realizable in existing scripting languages such as that in Ethereum. We show that CSCs for theft of cryptographic keys can be achieved using primitives, such as Succinct Non-interactive ARguments of Knowledge (SNARKs), that are already expressible in these languages and for which efficient supporting language extensions are anticipated. We show similarly that authenticated data feeds, an emerging feature of smart contract systems, can facilitate CSCs for real-world crimes (e.g., property crimes).\n Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the promise of smart contracts for beneficial goals.",
"title": ""
},
{
"docid": "bd20bbe7deb2383b6253ec3f576dcf56",
"text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.",
"title": ""
},
{
"docid": "5d0c211333bd484e29c602b4996d1292",
"text": "Humans tend to organize perceived information into hierarchies and structures, a principle that also applies to music. Even musically untrained listeners unconsciously analyze and segment music with regard to various musical aspects, for example, identifying recurrent themes or detecting temporal boundaries between contrasting musical parts. This paper gives an overview of state-of-theart methods for computational music structure analysis, where the general goal is to divide an audio recording into temporal segments corresponding to musical parts and to group these segments into musically meaningful categories. There are many different criteria for segmenting and structuring music audio. In particular, one can identify three conceptually different approaches, which we refer to as repetition-based, novelty-based, and homogeneitybased approaches. Furthermore, one has to account for different musical dimensions such as melody, harmony, rhythm, and timbre. In our state-of-the-art report, we address these different issues in the context of music structure analysis, while discussing and categorizing the most relevant and recent articles in this field.",
"title": ""
},
{
"docid": "38984b625ac24137b23444f4bd53a312",
"text": "Presence Volume /, Number 3. Summer / 992 Reprinted from Espacios 23-24, 1955. © / 992 The Massachusetts Institute of Technology Pandemonium reigns supreme in the film industry. Ever)' studio is hastily converting to its own \"revolutionär)'\" system—Cinerama, Colorama, Panoramic Screen, Cinemascope, Three-D, and Stereophonic Sound. A dozen marquees in Time Square are luring customers into the realm of a \"sensational new experience.\" Everywhere we see the \"initiated\" holding pencils before the winked eyes of the \"uninitiated\" explaining the mysteries of 3-D. The critics are lining up pro and con concluding their articles profoundly with \"after all, it's the story that counts.\" Along with other filmgoers desiring orientation, I have been reading these articles and have sadly discovered that they reflect this confusion rather than illuminate it. It is apparent that the inability to cope with the problem stems from a refusal to adopt a wider frame of reference, and from a meager understanding of the place art has in life generally. All living things engage, on a higher or lower level, in a continuous cycle of orientation and action. For example, an animal on a mountain ledge hears a rumbling sound and sees an avalanche of rocks descending on it. It cries with",
"title": ""
},
{
"docid": "4dca4af3b49056b6ab46749f0144a2cd",
"text": "Pie menus are a well-known technique for interacting with 2D environments and so far a large body of research documents their usage and optimizations. Yet, comparatively little research has been done on the usability of pie menus in immersive virtual environments (IVEs). In this paper we reduce this gap by presenting an implementation and evaluation of an extended hierarchical pie menu system for IVEs that can be operated with a six-degrees-of-freedom input device. Following an iterative development process, we first developed and evaluated a basic hierarchical pie menu system. To better understand how pie menus should be operated in IVEs, we tested this system in a pilot user study with 24 participants and focus on item selection. Regarding the results of the study, the system was tweaked and elements like check boxes, sliders, and color map editors were added to provide extended functionality. An expert review with five experts was performed with the extended pie menus being integrated into an existing VR application to identify potential design issues. Overall results indicated high performance and efficient design.",
"title": ""
},
{
"docid": "89f9977579e3924d4de16ec097b57e25",
"text": "Sixteen patients with corrosive acid ingestion were studied. The majority of patients (n = 10) had ingested sulphuric acid, and three other patients had ingested hydrochloric acid. The extent and severity of upper gastrointestinal tract injury was determined by fibreoptic endoscopy and necropsy. All the patients had oesophageal and gastric involvement but the duodenum was spared in the majority. The injury was not considered as mild (grade I) in any of these patients; five patients having moderate (grade II) and 10 patients having severe (grade III) injury. Complications and mortality occurred only in patients with grade III injury. Feeding jejunostomy for nutritional support was used in five patients (all grade III) with good results.",
"title": ""
},
{
"docid": "1f95e7fcd4717429259aa4b9581cf308",
"text": "This project is mainly focused to develop system for animal researchers & wild life photographers to overcome so many challenges in their day life today. When they engage in such situation, they need to be patiently waiting for long hours, maybe several days in whatever location and under severe weather conditions until capturing what they are interested in. Also there is a big demand for rare wild life photo graphs. The proposed method makes the task automatically use microcontroller controlled camera, image processing and machine learning techniques. First with the aid of microcontroller and four passive IR sensors system will automatically detect the presence of animal and rotate the camera toward that direction. Then the motion detection algorithm will get the animal into middle of the frame and capture by high end auto focus web cam. Then the captured images send to the PC and are compared with photograph database to check whether the animal is exactly the same as the photographer choice. If that captured animal is the exactly one who need to capture then it will automatically capture more. Though there are several technologies available none of these are capable of recognizing what it captures. There is no detection of animal presence in different angles. Most of available equipment uses a set of PIR sensors and whatever it disturbs the IR field will automatically be captured and stored. Night time images are black and white and have less details and clarity due to infrared flash quality. If the infrared flash is designed for best image quality, range will be sacrificed. The photographer might be interested in a specific animal but there is no facility to recognize automatically whether captured animal is the photographer’s choice or not.",
"title": ""
},
{
"docid": "2e384001b105d0b3ace839051cdddf88",
"text": "Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose to determine the effect of different algorithmic choices, including split criterion, pruning scheme and way to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion to use for the actual induction of the trees did not turn out to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging results since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.",
"title": ""
},
{
"docid": "ef24d571dc9a7a3fae2904b8674799e9",
"text": "With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there have been many researches on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architecture, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from the portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked up with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs as a greedy layer-wise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from the existing works which only made use of the files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from bottom to up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve the overall performance in malware detection compared with traditional shallow learning methods, deep learning methods with homogeneous framework, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.",
"title": ""
}
] |
scidocsrr
|
d603cce8a4de260416da2690c9c53227
|
Filter Bank Common Spatial Pattern (FBCSP) algorithm using online adaptive and semi-supervised learning
|
[
{
"docid": "867d6a1aa9699ba7178695c45a10d23e",
"text": "A study of different on-line adaptive classifiers, using various feature types is presented. Motor imagery brain computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed, adaptive autoregressive parameters, logarithmic band power estimates and the concatenation of both. Results show that all systems are stable and that the concatenation of features with continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation",
"title": ""
},
{
"docid": "1b3b2b8872d3b846120502a7a40e03d0",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
}
] |
[
{
"docid": "c47c1e991cd090c7e92ae61419ca823b",
"text": "In recent years many tone mapping operators (TMOs) have been presented in order to display high dynamic range images (HDRI) on typical display devices. TMOs compress the luminance range while trying to maintain contrast. The inverse of tone mapping, inverse tone mapping, expands a low dynamic range image (LDRI) into an HDRI. HDRIs contain a broader range of physical values that can be perceived by the human visual system. We propose a new framework that approximates a solution to this problem. Our framework uses importance sampling of light sources to find the areas considered to be of high luminance and subsequently applies density estimation to generate an expand map in order to extend the range in the high luminance areas using an inverse tone mapping operator. The majority of today’s media is stored in the low dynamic range. Inverse tone mapping operators (iTMOs) could thus potentially revive all of this content for use in high dynamic range display and image based lighting (IBL). Moreover, we show another application that benefits quick capture of HDRIs for use in IBL.",
"title": ""
},
{
"docid": "5314dc130e963288d181ad6d6d0e6434",
"text": "Compressive sensing (CS) is an emerging field that provides a framework for image recovery using sub-Nyquist sampling rates. The CS theory shows that a signal can be reconstructed from a small set of random projections, provided that the signal is sparse in some basis, e.g., wavelets. In this paper, we describe a method to directly recover background subtracted images using CS and discuss its applications in some communication constrained multi-camera computer vision problems. We show how to apply the CS theory to recover object silhouettes (binary background subtracted images) when the objects of interest occupy a small portion of the camera view, i.e., when they are sparse in the spatial domain. We cast the background subtraction as a sparse approximation problem and provide different solutions based on convex optimization and total variation. In our method, as opposed to learning the background, we learn and adapt a low dimensional compressed representation of it, which is sufficient to determine spatial innovations; object silhouettes are then estimated directly using the compressive samples without any auxiliary image reconstruction. We also discuss simultaneous appearance recovery of the objects using compressive measurements. In this case, we show that it may be necessary to reconstruct one auxiliary image. To demonstrate the performance of the proposed algorithm, we provide results on data captured using a compressive single-pixel camera. We also illustrate that our approach is suitable for image coding in communication constrained problems by using data captured by multiple conventional cameras to provide 2D tracking and 3D shape reconstruction results with compressive measurements.",
"title": ""
},
{
"docid": "fd22f81af03d9dbcd746ebdfed5277c6",
"text": "Numerous NLP applications rely on search-engine queries, both to extract information from and to compute statistics over the Web corpus. But search engines often limit the number of available queries. As a result, query-intensive NLP applications such as Information Extraction (IE) distribute their query load over several days, making IE a slow, offline process. This paper introduces a novel architecture for IE that obviates queries to commercial search engines. The architecture is embodied in a system called KNOWITNOW that performs high-precision IE in minutes instead of days. We compare KNOWITNOW experimentally with the previouslypublished KNOWITALL system, and quantify the tradeoff between recall and speed. KNOWITNOW’s extraction rate is two to three orders of magnitude higher than KNOWITALL’s. 1 Background and Motivation Numerous modern NLP applications use the Web as their corpus and rely on queries to commercial search engines to support their computation (Turney, 2001; Etzioni et al., 2005; Brill et al., 2001). Search engines are extremely helpful for several linguistic tasks, such as computing usage statistics or finding a subset of web documents to analyze in depth; however, these engines were not designed as building blocks for NLP applications. As a result, the applications are forced to issue literally millions of queries to search engines, which limits the speed, scope, and scalability of the applications. Further, the applications must often then fetch some web documents, which at scale can be very time-consuming. In response to heavy programmatic search engine use, Google has created the “Google API” to shunt programmatic queries away from Google.com and has placed hard quotas on the number of daily queries a program can issue to the API. Other search engines have also introduced mechanisms to limit programmatic queries, forcing applications to introduce “courtesy waits” between queries and to limit the number of queries they issue. To understand these efficiency problems in more detail, consider the KNOWITALL information extraction system (Etzioni et al., 2005). KNOWITALL has a generateand-test architecture that extracts information in two stages. First, KNOWITALL utilizes a small set of domainindependent extraction patterns to generate candidate facts (cf. (Hearst, 1992)). For example, the generic pattern “NP1 such as NPList2” indicates that the head of each simple noun phrase (NP) in NPList2 is a member of the class named in NP1. By instantiating the pattern for class City, KNOWITALL extracts three candidate cities from the sentence: “We provide tours to cities such as Paris, London, and Berlin.” Note that it must also fetch each document that contains a potential candidate. Next, extending the PMI-IR algorithm (Turney, 2001), KNOWITALL automatically tests the plausibility of the candidate facts it extracts using pointwise mutual information (PMI) statistics computed from search-engine hit counts. For example, to assess the likelihood that “Yakima” is a city, KNOWITALL will compute the PMI between Yakima and a set of k discriminator phrases that tend to have high mutual information with city names (e.g., the simple phrase “city”). Thus, KNOWITALL requires at least k search-engine queries for every candidate extraction it assesses. Due to KNOWITALL’s dependence on search-engine queries, large-scale experiments utilizing KNOWITALL take days and even weeks to complete, which makes research using KNOWITALL slow and cumbersome. 
Private access to Google-scale infrastructure would provide sufficient access to search queries, but at prohibitive cost, and the problem of fetching documents (even if from a cached copy) would remain (as we discuss in Section 2.1). Is there a feasible alternative Web-based IE system? If so, what size Web index and how many machines are required to achieve reasonable levels of precision/recall? What would the architecture of this IE system look like, and how fast would it run? To address these questions, this paper introduces a novel architecture for web information extraction. It consists of two components that supplant the generateand-test mechanisms in KNOWITALL. To generate extractions rapidly we utilize our own specialized search engine, called the Bindings Engine (or BE), which efficiently returns bindings in response to variabilized queries. For example, in response to the query “Cities such as ProperNoun(Head(〈NounPhrase〉))”, BE will return a list of proper nouns likely to be city names. To assess these extractions, we use URNS, a combinatorial model, which estimates the probability that each extraction is correct without using any additional search engine queries.1 For further efficiency, we introduce an approximation to URNS, based on frequency of extractions’ occurrence in the output of BE, and show that it achieves comparable precision/recall to URNS. Our contributions are as follows: 1. We present a novel architecture for Information Extraction (IE), embodied in the KNOWITNOW system, which does not depend on Web search-engine queries. 2. We demonstrate experimentally that KNOWITNOW is the first system able to extract tens of thousands of facts from the Web in minutes instead of days. 3. We show that KNOWITNOW’s extraction rate is two to three orders of magnitude greater than KNOWITALL’s, but this increased efficiency comes at the cost of reduced recall. We quantify this tradeoff for KNOWITNOW’s 60,000,000 page index and extrapolate how the tradeoff would change with larger indices. Our recent work has described the BE search engine in detail (Cafarella and Etzioni, 2005), and also analyzed the URNS model’s ability to compute accurate probability estimates for extractions (Downey et al., 2005). However, this is the first paper to investigate the composition of these components to create a fast IE system, and to compare it experimentally to KNOWITALL in terms of time, In contrast, PMI-IR, which is built into KNOWITALL, requires multiple search engine queries to assess each potential extraction. recall, precision, and extraction rate. The frequencybased approximation to URNS and the demonstration of its success are also new. The remainder of the paper is organized as follows. Section 2 provides an overview of BE’s design. Section 3 describes the URNS model and introduces an efficient approximation to URNS that achieves similar precision/recall. Section 4 presents experimental results. We conclude with related and future work in Sections 5 and 6. 2 The Bindings Engine This section explains how relying on standard search engines leads to a bottleneck for NLP applications, and provides a brief overview of the Bindings Engine (BE)—our solution to this problem. A comprehensive description of BE appears in (Cafarella and Etzioni, 2005). Standard search engines are computationally expensive for IE and other NLP tasks. IE systems issue multiple queries, downloading all pages that potentially match an extraction rule, and performing expensive processing on each page. 
For example, such systems operate roughly as follows on the query (“cities such as 〈NounPhrase〉”): 1. Perform a traditional search engine query to find all URLs containing the non-variable terms (e.g., “cities such as”) 2. For each such URL: (a) obtain the document contents, (b) find the searched-for terms (“cities such as”) in the document text, (c) run the noun phrase recognizer to determine whether text following “cities such as” satisfies the linguistic type requirement, (d) and if so, return the string We can divide the algorithm into two stages: obtaining the list of URLs from a search engine, and then processing them to find the 〈NounPhrase〉 bindings. Each stage poses its own scalability and speed challenges. The first stage makes a query to a commercial search engine; while the number of available queries may be limited, a single one executes relatively quickly. The second stage fetches a large number of documents, each fetch likely resulting in a random disk seek; this stage executes slowly. Naturally, this disk access is slow regardless of whether it happens on a locally-cached copy or on a remote document server. The observation that the second stage is slow, even if it is executed locally, is important because it shows that merely operating a “private” search engine does not solve the problem (see Section 2.1). The Bindings Engine supports queries containing typed variables (such as NounPhrase) and string-processing functions (such as “head(X)” or “ProperNoun(X)”) as well as standard query terms. BE processes a variable by returning every possible string in the corpus that has a matching type, and that can be substituted for the variable and still satisfy the user’s query. If there are multiple variables in a query, then all of them must simultaneously have valid substitutions. (So, for example, the query “<NounPhrase> is located in <NounPhrase>” only returns strings when noun phrases are found on both sides of “is located in”.) We call a string that meets these requirements a binding for the variable in question. These queries, and the bindings they elicit, can usefully serve as part of an information extraction system or other common NLP tasks (such as gathering usage statistics). Figure 1 illustrates some of the queries that BE can handle. president Bush <Verb> cities such as ProperNoun(Head(<NounPhrase>)) <NounPhrase> is the CEO of <NounPhrase> Figure 1: Examples of queries that can be handled by BE. Queries that include typed variables and stringprocessing functions allow NLP tasks to be done efficiently without downloading the original document during query processing. BE’s novel neighborhood index enables it to process these queries with O(k) random disk seeks and O(k) serial disk reads, where k is the number of non-variable terms in its query. As a result, BE can yield orders of magnitude speedup as shown in the asymptotic analysis later in this section. The neighborhood index is an augme",
"title": ""
},
{
"docid": "1b9778fd4238c4d562b01b875d2f72de",
"text": "In this paper a stain sensor to measure large strain (80%) in textiles is presented. It consists of a mixture of 50wt-% thermoplastic elastomer (TPE) and 50wt-% carbon black particles and is fiber-shaped with a diameter of 0.315mm. The attachment of the sensor to the textile is realized using a silicone film. This sensor configuration was characterized using a strain tester and measuring the resistance (extension-retraction cycles): It showed a linear resistance response to strain, a small hysteresis, no ageing effects and a small dependance on the strain velocity. The total mean error caused by all these effects was +/-5.5% in strain. Washing several times in a conventional washing machine did not influence the sensor properties. The paper finishes by showing an example application where 21 strain sensors were integrated into a catsuit. With this garment, 27 upper body postures could be recognized with an accuracy of 97%.",
"title": ""
},
{
"docid": "265884122a08918e6d271b4cea3a455d",
"text": "This study exploits the CMOS-MEMS technology to demonstrate a condenser microphone without back-plate. The reference sensing electrodes are fixed to the substrate, and thus no back-plate is required. To reduce the unwanted deformations resulted from the thin-film residual-stresses and temperature variation for the suspended CMOS-MEMS structures, the suspended acoustic diaphragm and sensing electrodes are respectively formed by the pure-dielectric and symmetric metal-dielectric layers. The design was implemented using TSMC 0.18μm 1P6M standard CMOS process, and the in-house post-CMOS releasing. Typical microphone with acoustic-diaphragm of 300μm-diameter and sensing-electrode of 50μm-long is fabricated and tested. Measurements indicate the sensitivity is −64dBV/Pa at 1kHz under 13.5V bias-voltage. The design enables the CMOS-MEMS microphone having good temperature stability between 30∼90°C.",
"title": ""
},
{
"docid": "d7e2654767d1178871f3f787f7616a94",
"text": "We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans - with manually segmented white matter, cerebral cortex, ventricles and subcortical structures - to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer's Disease.",
"title": ""
},
{
"docid": "2590725b2b99a6acd2bc8b9f81ad46ee",
"text": "The Internet of Things (IoT) provides the ability for humans and computers to learn and interact from billions of things that include sensors, actuators, services, and other Internet-connected objects. The realization of IoT systems will enable seamless integration of the cyber world with our physical world and will fundamentally change and empower human interaction with the world. A key technology in the realization of IoT systems is middleware, which is usually described as a software system designed to be the intermediary between IoT devices and applications. In this paper, we first motivate the need for an IoT middleware via an IoT application designed for real-time prediction of blood alcohol content using smartwatch sensor data. This is then followed by a survey on the capabilities of the existing IoT middleware. We further conduct a thorough analysis of the challenges and the enabling technologies in developing an IoT middleware that embraces the heterogeneity of IoT devices and also supports the essential ingredients of composition, adaptability, and security aspects of an IoT system.",
"title": ""
},
{
"docid": "764f05288ff0a0bbf77f264fcefb07eb",
"text": "Recent advances in energy harvesting have been intensified due to urgent needs of portable, wireless electronics with extensive life span. The idea of energy harvesting is applicable to sensors that are placed and operated on some entities for a long time, or embedded into structures or human bodies, in which it is troublesome or detrimental to replace the sensor module batteries. Such sensors are commonly called “self-powered sensors.” The energy harvester devices are capable of capturing environmental energy and supplanting the battery in a standalone module, or working along with the battery to extend substantially its life. Vibration is considered one of the most high power and efficient among other ambient energy sources, such as solar energy and temperature difference. Piezoelectric and electromagnetic devices are mostly used to convert vibration to ac electric power. For vibratory harvesting, a delicately designed power conditioning circuit is required to store as much as possible of the device-output power into a battery. The design for this power conditioning needs to be consistent with the electric characteristics of the device and battery to achieve maximum power transfer and efficiency. This study offers an overview on various power conditioning electronic circuits designed for vibratory harvester devices and their applications to self-powered sensors. Comparative comments are provided in terms of circuit topology differences, conversion efficiencies and applicability to a sensor module.",
"title": ""
},
{
"docid": "3076b9f747b1851f5ead6ca46e41970a",
"text": "This paper applies dimensional synthesis to explore the geometric design of dexterous three-fingered robotic hands for maximizing precision manipulation workspace, in which the hand stably moves an object with respect to the palm of the hand, with contacts only on the fingertips. We focus primarily on the tripod grasp, which is the most commonly used grasp for precision manipulation. We systematically explore the space of design parameters, with two main objectives: maximize the workspace of a fully actuated hand and explore how under-actuation modifies it. We use a mathematical framework that models the hand-plus-object system and examine how the workspace varies with changes in nine hand and object parameters such as link length and finger arrangement on the palm. Results show that to achieve the largest workspaces the palm radius should be approximately half of a finger length larger than the target object radius, that the distal link of the two-link fingers should be around 1–1.2 times the length of the proximal link, and that fingers should be arranged symmetrically about the palm with object contacts also symmetric. Furthermore, a proper parameter design for an under-actuated hand can achieve up to 50% of the workspace of a fully actuated hand. When compared to the system parameters of existing popular hand designs, larger palms and longer distal links are needed to maximize the manipulation workspace of the studied design.",
"title": ""
},
{
"docid": "bd7f4a27628506eb707918c990704405",
"text": "A multi database model of distributed information retrieval is presented in which people are assumed to have access to many searchable text databases In such an environment full text information retrieval consists of discovering database contents ranking databases by their expected ability to satisfy the query searching a small number of databases and merging results returned by di erent databases This paper presents algorithms for each task It also discusses how to reorganize conventional test collections into multi database testbeds and evaluation methodologies for multi database experiments A broad and diverse group of experimental results is presented to demonstrate that the algorithms are e ective e cient robust and scalable",
"title": ""
},
{
"docid": "f1325dd1350acf612dc1817db693a3d6",
"text": "Software for the measurement of genetic diversity (SMOGD) is a web-based application for the calculation of the recently proposed genetic diversity indices G'(ST) and D(est) . SMOGD includes bootstrapping functionality for estimating the variance, standard error and confidence intervals of estimated parameters, and SMOGD also generates genetic distance matrices from pairwise comparisons between populations. SMOGD accepts standard, multilocus Genepop and Arlequin formatted input files and produces HTML and tab-delimited output. This allows easy data submission, quick visualization, and rapid import of results into spreadsheet or database programs.",
"title": ""
},
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
{
"docid": "42a412b11300ec8d7721c1f532dadfb9",
"text": " Most data-driven dependency parsing approaches assume that sentence structure is represented as trees. Although trees have several desirable properties from both computational and linguistic perspectives, the structure of linguistic phenomena that goes beyond shallow syntax often cannot be fully captured by tree representations. We present a parsing approach that is nearly as simple as current data-driven transition-based dependency parsing frameworks, but outputs directed acyclic graphs (DAGs). We demonstrate the benefits of DAG parsing in two experiments where its advantages over dependency tree parsing can be clearly observed: predicate-argument analysis of English and syntactic analysis of Danish with a representation that includes long-distance dependencies and anaphoric reference links.",
"title": ""
},
{
"docid": "d15e7e655e7afc86e30e977516de7720",
"text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"title": ""
},
{
"docid": "fab36d134562d6c3a768841ff7e675b7",
"text": "A submicron of Ni/Sn transient liquid phase bonding at low temperature was investigated to surmount nowadays fine-pitch Cu/Sn process challenge. After bonding process, only uniform and high-temperature stable Ni3Sn4 intermetallic compound was existed. In addition, the advantages of this scheme showed excellent electrical and reliability performance and mechanical strength.",
"title": ""
},
{
"docid": "eb0a907ad08990b0fe5e2374079cf395",
"text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.",
"title": ""
},
{
"docid": "516ef94fad7f7e5801bf1ef637ffb136",
"text": "With parallelizable attention networks, the neural Transformer is very fast to train. However, due to the auto-regressive architecture and self-attention in the decoder, the decoding procedure becomes slow. To alleviate this issue, we propose an average attention network as an alternative to the self-attention network in the decoder of the neural Transformer. The average attention network consists of two layers, with an average layer that models dependencies on previous positions and a gating layer that is stacked over the average layer to enhance the expressiveness of the proposed attention network. We apply this network on the decoder part of the neural Transformer to replace the original target-side self-attention model. With masking tricks and dynamic programming, our model enables the neural Transformer to decode sentences over four times faster than its original version with almost no loss in training time and translation performance. We conduct a series of experiments on WMT17 translation tasks, where on 6 different language pairs, we obtain robust and consistent speed-ups in decoding.1",
"title": ""
},
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
},
{
"docid": "8ccb6c767704bc8aee424d17cf13d1e3",
"text": "In this paper, we present a page classification application in a banking workflow. The proposed architecture represents administrative document images by merging visual and textual descriptions. The visual description is based on a hierarchical representation of the pixel intensity distribution. The textual description uses latent semantic analysis to represent document content as a mixture of topics. Several off-the-shelf classifiers and different strategies for combining visual and textual cues have been evaluated. A final step uses an $$n$$ n -gram model of the page stream allowing a finer-grained classification of pages. The proposed method has been tested in a real large-scale environment and we report results on a dataset of 70,000 pages.",
"title": ""
}
] |
scidocsrr
|
509d38ceda71f68928cfcc16c6e5e604
|
Protected area needs in a changing climate
|
[
{
"docid": "a28be57b2eb045a525184b67afb14bb2",
"text": "Climate change has already triggered species distribution shifts in many parts of the world. Increasing impacts are expected for the future, yet few studies have aimed for a general understanding of the regional basis for species vulnerability. We projected late 21st century distributions for 1,350 European plants species under seven climate change scenarios. Application of the International Union for Conservation of Nature and Natural Resources Red List criteria to our projections shows that many European plant species could become severely threatened. More than half of the species we studied could be vulnerable or threatened by 2080. Expected species loss and turnover per pixel proved to be highly variable across scenarios (27-42% and 45-63% respectively, averaged over Europe) and across regions (2.5-86% and 17-86%, averaged over scenarios). Modeled species loss and turnover were found to depend strongly on the degree of change in just two climate variables describing temperature and moisture conditions. Despite the coarse scale of the analysis, species from mountains could be seen to be disproportionably sensitive to climate change (approximately 60% species loss). The boreal region was projected to lose few species, although gaining many others from immigration. The greatest changes are expected in the transition between the Mediterranean and Euro-Siberian regions. We found that risks of extinction for European plants may be large, even in moderate scenarios of climate change and despite inter-model variability.",
"title": ""
}
] |
[
{
"docid": "795a4d9f2dc10563dfee28c3b3cd0f08",
"text": "A wide-band probe fed patch antenna with low cross polarization and symmetrical broadside radiation pattern is proposed and studied. By employing a novel meandering probe feed and locating a patch about 0.1/spl lambda//sub 0/ above a ground plane, a patch antenna with 30% impedance bandwidth (SWR<2) and 9 dBi gain is designed. The far field radiation pattern of the antenna is stable across the operating bandwidth. Parametric studies and design guidelines of the proposed feeding structure are provided.",
"title": ""
},
{
"docid": "72c79b86a91f7c8453cd6075314a6b4d",
"text": "This talk aims to introduce LATEX users to XSL-FO. It does not attempt to give an exhaustive view of XSL-FO, but allows a LATEX user to get started. We show the common and different points between these two approaches of word processing.",
"title": ""
},
{
"docid": "888de1004e212e1271758ac35ff9807d",
"text": "We present the design and implementation of iVoLVER, a tool that allows users to create visualizations without textual programming. iVoLVER is designed to enable flexible acquisition of many types of data (text, colors, shapes, quantities, dates) from multiple source types (bitmap charts, webpages, photographs, SVGs, CSV files) and, within the same canvas, supports transformation of that data through simple widgets to construct interactive animated visuals. Aside from the tool, which is web-based and designed for pen and touch, we contribute the design of the interactive visual language and widgets for extraction, transformation, and representation of data. We demonstrate the flexibility and expressive power of the tool through a set of scenarios, and discuss some of the challenges encountered and how the tool fits within the current infovis tool landscape.",
"title": ""
},
{
"docid": "718e31eabfd386768353f9b75d9714eb",
"text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.",
"title": ""
},
{
"docid": "b2817d85893a624574381eee4f8648db",
"text": "A coupled-fed antenna design capable of covering eight-band WWAN/LTE operation in a smartphone and suitable to integrate with a USB connector is presented. The antenna comprises an asymmetric T-shaped monopole as a coupling feed and a radiator as well, and a coupled-fed loop strip shorted to the ground plane. The antenna generates a wide lower band to cover (824-960 MHz) for GSM850/900 operation and a very wide upper band of larger than 1 GHz to cover the GPS/GSM1800/1900/UMTS/LTE2300/2500 operation (1565-2690 MHz). The proposed antenna provides wideband operation and exhibits great flexible behavior. The antenna is capable of providing eight-band operation for nine different sizes of PCBs, and enhance impedance matching only by varying a single element length, L. Details of proposed antenna, parameters and performance are presented and discussed in this paper.",
"title": ""
},
{
"docid": "d197875ea8637bf36d2746a2a1861c23",
"text": "There are billions of Internet of things (IoT) devices connecting to the Internet and the number is increasing. As a still ongoing technology, IoT can be used in different fields, such as agriculture, healthcare, manufacturing, energy, retailing and logistics. IoT has been changing our world and the way we live and think. However, IoT has no uniform architecture and there are different kinds of attacks on the different layers of IoT, such as unauthorized access to tags, tag cloning, sybil attack, sinkhole attack, denial of service attack, malicious code injection, and man in middle attack. IoT devices are more vulnerable to attacks because it is simple and some security measures can not be implemented. We analyze the privacy and security challenges in the IoT and survey on the corresponding solutions to enhance the security of IoT architecture and protocol. We should focus more on the security and privacy on IoT and help to promote the development of IoT.",
"title": ""
},
{
"docid": "3d12dea4ae76c5af54578262996fe0bb",
"text": "We introduce a two-layer undirected graphical model, calle d a “Replicated Softmax”, that can be used to model and automatically extract low -dimensional latent semantic representations from a large unstructured collec ti n of documents. We present efficient learning and inference algorithms for thi s model, and show how a Monte-Carlo based method, Annealed Importance Sampling, c an be used to produce an accurate estimate of the log-probability the model a ssigns to test data. This allows us to demonstrate that the proposed model is able to g neralize much better compared to Latent Dirichlet Allocation in terms of b th the log-probability of held-out documents and the retrieval accuracy.",
"title": ""
},
{
"docid": "a58930da8179d71616b8b6ef01ed1569",
"text": "Collecting sensor data results in large temporal data sets which need to be visualized, analyzed, and presented. One-dimensional time-series charts are used, but these present problems when screen resolution is small in comparison to the data. This can result in severe over-plotting, giving rise for the requirement to provide effective rendering and methods to allow interaction with the detailed data. Common solutions can be categorized as multi-scale representations, frequency based, and lens based interaction techniques. In this paper, we comparatively evaluate existing methods, such as Stack Zoom [15] and ChronoLenses [38], giving a graphical overview of each and classifying their ability to explore and interact with data. We propose new visualizations and other extensions to the existing approaches. We undertake and report an empirical study and a field study using these techniques.",
"title": ""
},
{
"docid": "73adcdf18b86ab3598731d75ac655f2c",
"text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
{
"docid": "bfa38fded95303834d487cb27d228ad7",
"text": "Apparel classification encompasses the identification of an outfit in an image. The area has its applications in social media advertising, e-commerce and criminal law. In our work, we introduce a new method for shopping apparels online. This paper describes our approach to classify images using Convolutional Neural Networks. We concentrate mainly on two aspects of apparel classification: (1) Multiclass classification of apparel type and (2) Similar Apparel retrieval based on the query image. This shopping technique relieves the burden of storing a lot of information related to the images and traditional ways of filtering search results can be replaced by image filters",
"title": ""
},
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "80ff93b5f2e0ff3cff04c314e28159fc",
"text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.",
"title": ""
},
{
"docid": "f8b0dcd771e7e7cf50a05cf7221f4535",
"text": "Studies on monocyte and macrophage biology and differentiation have revealed the pleiotropic activities of these cells. Macrophages are tissue sentinels that maintain tissue integrity by eliminating/repairing damaged cells and matrices. In this M2-like mode, they can also promote tumor growth. Conversely, M1-like macrophages are key effector cells for the elimination of pathogens, virally infected, and cancer cells. Macrophage differentiation from monocytes occurs in the tissue in concomitance with the acquisition of a functional phenotype that depends on microenvironmental signals, thereby accounting for the many and apparently opposed macrophage functions. Many questions arise. When monocytes differentiate into macrophages in a tissue (concomitantly adopting a specific functional program, M1 or M2), do they all die during the inflammatory reaction, or do some of them survive? Do those that survive become quiescent tissue macrophages, able to react as naïve cells to a new challenge? Or, do monocyte-derived tissue macrophages conserve a \"memory\" of their past inflammatory activation? This review will address some of these important questions under the general framework of the role of monocytes and macrophages in the initiation, development, resolution, and chronicization of inflammation.",
"title": ""
},
{
"docid": "f71b1df36ee89cdb30a1dd29afc532ea",
"text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.",
"title": ""
},
{
"docid": "8bdd02547be77f4c825c9aed8016ddf8",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "232bf10d578c823b0cd98a3641ace44a",
"text": "The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism.",
"title": ""
},
{
"docid": "66fd7de53986e8c4a7ed08ed88f0b45b",
"text": "BACKGROUND\nConcerns regarding the risk of estrogen replacement have resulted in a significant increase in the use of soy products by menopausal women who, despite the lack of evidence of the efficacy of such products, seek alternatives to menopausal hormone therapy. Our goal was to determine the efficacy of soy isoflavone tablets in preventing bone loss and menopausal symptoms.\n\n\nMETHODS\nThe study design was a single-center, randomized, placebo-controlled, double-blind clinical trial conducted from July 1, 2004, through March 31, 2009. Women aged 45 to 60 years within 5 years of menopause and with a bone mineral density T score of -2.0 or higher in the lumbar spine or total hip were randomly assigned, in equal proportions, to receive daily soy isoflavone tablets, 200 mg, or placebo. The primary outcome was changes in bone mineral density in the lumbar spine, total hip, and femoral neck at the 2-year follow-up. Secondary outcomes included changes in menopausal symptoms, vaginal cytologic characteristics, N -telopeptide of type I bone collagen, lipids, and thyroid function.\n\n\nRESULTS\nAfter 2 years, no significant differences were found between the participants receiving soy tablets (n = 122) and those receiving placebo (n = 126) regarding changes in bone mineral density in the spine (-2.0% and -2.3%, respectively), the total hip (-1.2% and -1.4%, respectively), or the femoral neck (-2.2% and -2.1%, respectively). A significantly larger proportion of participants in the soy group experienced hot flashes and constipation compared with the control group. No significant differences were found between groups in other outcomes.\n\n\nCONCLUSIONS\nIn this population, the daily administration of tablets containing 200 mg of soy isoflavones for 2 years did not prevent bone loss or menopausal symptoms.\n\n\nTRIAL REGISTRATION\nclinicaltrials.gov Identifier: NCT00076050.",
"title": ""
},
{
"docid": "a63db4f5e588e23e4832eae581fc1c4b",
"text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.",
"title": ""
},
{
"docid": "dba13fea4538f23ea1208087d3e81d6b",
"text": "This paper investigates the effectiveness of using MeSH® in PubMed through its automatic query expansion process: Automatic Term Mapping (ATM). We run Boolean searches based on a collection of 55 topics and about 160,000 MEDLINE® citations used in the 2006 and 2007 TREC Genomics Tracks. For each topic, we first automatically construct a query by selecting keywords from the question. Next, each query is expanded by ATM, which assigns different search tags to terms in the query. Three search tags: [MeSH Terms], [Text Words], and [All Fields] are chosen to be studied after expansion because they all make use of the MeSH field of indexed MEDLINE citations. Furthermore, we characterize the two different mechanisms by which the MeSH field is used. Retrieval results using MeSH after expansion are compared to those solely based on the words in MEDLINE title and abstracts. The aggregate retrieval performance is assessed using both F-measure and mean rank precision. Experimental results suggest that query expansion using MeSH in PubMed can generally improve retrieval performance, but the improvement may not affect end PubMed users in realistic situations.",
"title": ""
}
] |
scidocsrr
|
f01b3bcc1e3f6ba62a91414f97d33d8d
|
Marketplace or Reseller?
|
[
{
"docid": "c7d629a83de44e17a134a785795e26d8",
"text": "How can firms profitably give away free products? This paper provides a novel answer and articulates tradeoffs in a space of information product design. We introduce a formal model of two-sided network externalities based in textbook economics—a mix of Katz & Shapiro network effects, price discrimination, and product differentiation. Externality-based complements, however, exploit a different mechanism than either tying or lock-in even as they help to explain many recent strategies such as those of firms selling operating systems, Internet browsers, games, music, and video. The model presented here argues for three simple but useful results. First, even in the absence of competition, a firm can rationally invest in a product it intends to give away into perpetuity. Second, we identify distinct markets for content providers and end consumers and show that either can be a candidate for a free good. Third, product coupling across markets can increase consumer welfare even as it increases firm profits. The model also generates testable hypotheses on the size and direction of network effects while offering insights to regulators seeking to apply antitrust law to network markets. ACKNOWLEDGMENTS: We are grateful to participants of the 1999 Workshop on Information Systems and Economics, the 2000 Association for Computing Machinery SIG E-Commerce, the 2000 International Conference on Information Systems, the 2002 Stanford Institute for Theoretical Economics (SITE) workshop on Internet Economics, the 2003 Insitut D’Economie Industrielle second conference on “The Economics of the Software and Internet Industries,” as well as numerous participants at university seminars. We wish to thank Tom Noe for helpful observations on oligopoly markets, Lones Smith, Kai-Uwe Kuhn, and Jovan Grahovac for corrections and model generalizations, Jeff MacKie-Mason for valuable feedback on model design and bundling, and Hal Varian for helpful comments on firm strategy and model implications. Frank Fisher provided helpful advice on and knowledge of the Microsoft trial. Jean Tirole provided useful suggestions and examples, particularly in regard to credit card markets. Paul Resnick proposed the descriptive term “internetwork” externality to describe two-sided network externalities. Tom Eisenmann provided useful feedback and examples. We also thank Robert Gazzale, Moti Levi, and Craig Newmark for their many helpful observations. This research has been supported by NSF Career Award #IIS 9876233. For an earlier version of the paper that also addresses bundling and competition, please see “Information Complements, Substitutes, and Strategic Product Design,” November 2000, http://ssrn.com/abstract=249585.",
"title": ""
},
{
"docid": "4a87e61106125ffdd49c42517ce78b87",
"text": "Due to network effects and switching costs, platform providers often become entrenched. To dislodge them, entrants generally must offer revolutionary products. We explore a second path to platform leadership change that does not rely on Schumpeterian creative destruction: platform envelopment. By leveraging common components and shared user relationships, one platform provider can move into another’s market, combining its own functionality with the target’s in a multi-platform bundle. Dominant firms otherwise sheltered from entry by standalone rivals may be vulnerable to an adjacent platform provider’s envelopment attack. We analyze conditions under which envelopment strategies are likely to succeed.",
"title": ""
},
{
"docid": "58c2f9f5f043f87bc51d043f70565710",
"text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
}
] |
[
{
"docid": "14e5e95ae4422120f5f1bb8cccb2b186",
"text": "We describe an approach to understand the peculiar and counterintuitive generalization properties of deep neural networks. The approach involves going beyond worst-case theoretical capacity control frameworks that have been popular in machine learning in recent years to revisit old ideas in the statistical mechanics of neural networks. Within this approach, we present a prototypical Very Simple Deep Learning (VSDL) model, whose behavior is controlled by two control parameters, one describing an effective amount of data, or load, on the network (that decreases when noise is added to the input), and one with an effective temperature interpretation (that increases when algorithms are early stopped). Using this model, we describe how a very simple application of ideas from the statistical mechanics theory of generalization provides a strong qualitative description of recently-observed empirical results regarding the inability of deep neural networks not to overfit training data, discontinuous learning and sharp transitions in the generalization properties of learning algorithms, etc.",
"title": ""
},
{
"docid": "8bcda11934a1eaff4b41cbe695bbfc4f",
"text": "Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role as backprop. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients, at each layer. Like gradients, they are propagated backwards. In a way that is related but different from previously proposed proxies for back-propagation which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfectness of the auto-encoders is very effective to make target propagation actually work, along with adaptive learning rates.",
"title": ""
},
{
"docid": "a9e27b52ed31b47c23b1281c28556487",
"text": "Nuclear receptors are integrators of hormonal and nutritional signals, mediating changes to metabolic pathways within the body. Given that modulation of lipid and glucose metabolism has been linked to diseases including type 2 diabetes, obesity and atherosclerosis, a greater understanding of pathways that regulate metabolism in physiology and disease is crucial. The liver X receptors (LXRs) and the farnesoid X receptors (FXRs) are activated by oxysterols and bile acids, respectively. Mounting evidence indicates that these nuclear receptors have essential roles, not only in the regulation of cholesterol and bile acid metabolism but also in the integration of sterol, fatty acid and glucose metabolism.",
"title": ""
},
{
"docid": "77b1e7b6f91cf5e2d4380a9d117ae7d9",
"text": "This paper theoretically introduces and develops a new operation diagram (OPD) and parameter estimator for the synchronous reluctance machine (SynRM). The OPD demonstrates the behavior of the machine's main performance parameters, such as torque, current, voltage, frequency, flux, power factor (PF), and current angle, all in one graph. This diagram can easily be used to describe different control strategies, possible operating conditions, both below- and above-rated speeds, etc. The saturation effect is also discussed with this diagram by finite-element-method calculations. A prototype high-performance SynRM is designed for experimental studies, and then, both machines' [corresponding induction machine (IM)] performances at similar loading and operation conditions are tested, measured, and compared to demonstrate the potential of SynRM. The laboratory measurements (on a standard 15-kW Eff1 IM and its counterpart SynRM) show that SynRM has higher efficiency, torque density, and inverter rating and lower rotor temperature and PF in comparison to IM at the same winding-temperature-rise condition. The measurements show that the torque capability of SynRM closely follows that of IM.",
"title": ""
},
{
"docid": "30740e33cdb2c274dbd4423e8f56405e",
"text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.",
"title": ""
},
{
"docid": "9adf653a332e07b8aa055b62449e1475",
"text": "False-belief task have mainly been associated with the explanatory notion of the theory of mind and the theory-theory. However, it has often been pointed out that this kind of highlevel reasoning is computational and time expensive. During the last decades, the idea of embodied intelligence, i.e. complex behavior caused by sensorimotor contingencies, has emerged in both the fields of neuroscience, psychology and artificial intelligence. Viewed from this perspective, the failing in a false-belief test can be the result of the impairment to recognize and track others’ sensorimotor contingencies and affordances. Thus, social cognition is explained in terms of lowlevel signals instead of high-level reasoning. In this work, we present a generative model for optimal action selection which simultaneously can be employed to make predictions of others’ actions. As we base the decision making on a hidden state representation of sensorimotor signals, this model is in line with the ideas of embodied intelligence. We demonstrate how the tracking of others’ hidden states can give rise to correct falsebelief inferences, while a lack thereof leads to failing. With this work, we want to emphasize the importance of sensorimotor contingencies in social cognition, which might be a key to artificial, socially intelligent systems.",
"title": ""
},
{
"docid": "3e43ee5513a0bd8bea8b1ea5cf8cefec",
"text": "Hans-Juergen Boehm Computer Science Department, Rice University, Houston, TX 77251-1892, U.S.A. Mark Weiser Xerox Corporation, Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304, U.S.A. A later version of this paper appeared in Software Practice and Experience 18, 9, pp. 807-820. Copyright 1988 by John Wiley and Sons, Ld. The publishers rules appear to allow posting of preprints, but only on the author’s web site.",
"title": ""
},
{
"docid": "4107fe17e6834f96a954e13cbb920f78",
"text": "Non-orthogonal multiple access (NOMA) can support more users than OMA techniques using the same wireless resources, which is expected to support massive connectivity for Internet of Things in 5G. Furthermore, in order to reduce the transmission latency and signaling overhead, grant-free transmission is highly expected in the uplink NOMA systems, where user activity has to be detected. In this letter, by exploiting the temporal correlation of active user sets, we propose a dynamic compressive sensing (DCS)-based multi-user detection (MUD) to realize both user activity and data detection in several continuous time slots. In particular, as the temporal correlation of the active user sets between adjacent time slots exists, we can use the estimated active user set in the current time slot as the prior information to estimate the active user set in the next time slot. Simulation results show that the proposed DCS-based MUD can achieve much better performance than that of the conventional CS-based MUD in NOMA systems.",
"title": ""
},
{
"docid": "4afbb5f877f3920dccdf60f6f4dfbf91",
"text": "Handling degenerate rotation-only camera motion is a challenge for keyframe-based simultaneous localization and mapping with six degrees of freedom. Existing systems usually filter corresponding keyframe candidates, resulting in mapping starvation and tracking failure. We propose to employ these otherwise discarded keyframes to build up local panorama maps registered in the 3D map. Thus, the system is able to maintain tracking during rotational camera motions. Additionally, we seek to actively associate panoramic and 3D map data for improved 3D mapping through the triangulation of more new 3D map features. We demonstrate the efficacy of our approach in several evaluations that show how the combined system handles rotation only camera motion while creating larger and denser maps compared to a standard SLAM system.",
"title": ""
},
{
"docid": "8a6b9930a9dccb0555980140dd6c4ae4",
"text": "The mass shooting at Sandy Hook elementary school on December 14, 2012 catalyzed a year of active debate and legislation on gun control in the United States. Social media hosted an active public discussion where people expressed their support and opposition to a variety of issues surrounding gun legislation. In this paper, we show how a contentbased analysis of Twitter data can provide insights and understanding into this debate. We estimate the relative support and opposition to gun control measures, along with a topic analysis of each camp by analyzing over 70 million gun-related tweets from 2013. We focus on spikes in conversation surrounding major events related to guns throughout the year. Our general approach can be applied to other important public health and political issues to analyze the prevalence and nature of public opinion.",
"title": ""
},
{
"docid": "725e92f13cc7c03b890b5d2e7380b321",
"text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.",
"title": ""
},
{
"docid": "8b158bfaf805974c1f8478c7ac051426",
"text": "BACKGROUND AND AIMS\nThe analysis of large-scale genetic data from thousands of individuals has revealed the fact that subtle population genetic structure can be detected at levels that were previously unimaginable. Using the Human Genome Diversity Panel as reference (51 populations - 650,000 SNPs), this works describes a systematic evaluation of the resolution that can be achieved for the inference of genetic ancestry, even when small panels of genetic markers are used.\n\n\nMETHODS AND RESULTS\nA comprehensive investigation of human population structure around the world is undertaken by leveraging the power of Principal Components Analysis (PCA). The problem is dissected into hierarchical steps and a decision tree for the prediction of individual ancestry is proposed. A complete leave-one-out validation experiment demonstrates that, using all available SNPs, assignment of individuals to their self-reported populations of origin is essentially perfect. Ancestry informative genetic markers are selected using two different metrics (In and correlation with PCA scores). A thorough cross-validation experiment indicates that, in most cases here, the number of SNPs needed for ancestry inference can be successfully reduced to less than 0.1% of the original 650,000 while retaining close to 100% accuracy. This reduction can be achieved using a novel clustering-based redundancy removal algorithm that is also introduced here. Finally, the applicability of our suggested SNP panels is tested on HapMap Phase 3 populations.\n\n\nCONCLUSION\nThe proposed methods and ancestry informative marker panels, in combination with the increasingly more comprehensive databases of human genetic variation, open new horizons in a variety of fields, ranging from the study of human evolution and population history, to medical genetics and forensics.",
"title": ""
},
{
"docid": "2052d056e4f4831ebd9992882e8e4015",
"text": "Soccer video semantic analysis has attracted a lot of researchers in the last few years. Many methods of machine learning have been applied to this task and have achieved some positive results, but the neural network method has not yet been used to this task from now. Taking into account the advantages of Convolution Neural Network(CNN) in fully exploiting features and the ability of Recurrent Neural Network(RNN) in dealing with the temporal relation, we construct a deep neural network to detect soccer video event in this paper. First we determine the soccer video event boundary which we used Play-Break(PB) segment by the traditional method. Then we extract the semantic features of key frames from PB segment by pre-trained CNN, and at last use RNN to map the semantic features of PB to soccer event types, including goal, goal attempt, card and corner. Because there is no suitable and effective dataset, we classify soccer frame images into nine categories according to their different semantic views and then construct a dataset called Soccer Semantic Image Dataset(SSID) for training CNN. The sufficient experiments evaluated on 30 soccer match videos demonstrate the effectiveness of our method than state-of-art methods.",
"title": ""
},
{
"docid": "7a6181a65121ce577bc77711ce7a095c",
"text": "We present a new, general, and real-time technique for soft global illumination in low-frequency environmental lighting. It accumulates over relatively few spherical proxies which approximate the light blocking and re-radiating effect of dynamic geometry. Soft shadows are computed by accumulating log visibility vectors for each sphere proxy as seen by each receiver point. Inter-reflections are computed by accumulating vectors representing the proxy's unshadowed radiance when illuminated by the environment. Both vectors capture low-frequency directional dependence using the spherical harmonic basis. We also present a new proxy accumulation strategy that splats each proxy to receiver pixels in image space to collect its shadowing and indirect lighting contribution. Our soft GI rendering pipeline unifies direct and indirect soft effects with a simple accumulation strategy that maps entirely to the GPU and outperforms previous vertex-based methods.",
"title": ""
},
{
"docid": "2d98a90332278049d61a6eb431317216",
"text": "Feature extraction is a method of capturing visual content of an image. The feature extraction is the process to represent raw image in its reduced form to facilitate decision making such as pattern classification. We have tried to address the problem of classification MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. The objective of this paper is to present a novel method of feature selection and extraction. This approach combines the Intensity, Texture, shape based features and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The experiment is performed on 140 tumor contained brain MR images from the Internet Brain Segmentation Repository. The proposed technique has been carried out over a larger database as compare to any previous work and is more robust and effective. PCA and Linear Discriminant Analysis (LDA) were applied on the training sets. The Support Vector Machine (SVM) classifier served as a comparison of nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of features used. The feature selection using the proposed technique is more beneficial as it analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "61e460c93d82acf80983f5947154b139",
"text": "The Internet has many benefits, some of them are to gain knowledge and gain the latest information. The internet can be used by anyone and can contain any information, including negative content such as pornographic content, radicalism, racial intolerance, violence, fraud, gambling, security and drugs. Those contents cause the number of children victims of pornography on social media increasing every year. Based on that, it needs a system that detects pornographic content on social media. This study aims to determine the best model to detect the pornographic content. Model selection is determined based on unigram and bigram features, classification algorithm, k-fold cross validation. The classification algorithm used is Support Vector Machine and Naive Bayes. The highest F1-score is yielded by the model with combination of Support Vector Machine, most common words, and combination of unigram and bigram, which returns F1-Score value of 91.14%.",
"title": ""
},
{
"docid": "85c32427a1a6c04e3024d22b03b26725",
"text": "Monte Carlo tree search (MCTS) is extremely popular in computer Go which determines each action by enormous simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute search of millions of future interactions. In this paper, we propose a computer Go system that follows experts way of thinking and playing. Our system consists of two parts. The first part is a novel deep alternative neural network (DANN) used to generate candidates of next move. Compared with existing deep convolutional neural network (DCNN), DANN inserts recurrent layer after each convolutional layer and stacks them in an alternative manner. We show such setting can preserve more contexts of local features and its evolutions which are beneficial for move prediction. The second part is a long-term evaluation (LTE) module used to provide a reliable evaluation of candidates rather than a single probability from move predictor. This is consistent with human experts nature of playing since they can foresee tens of steps to give an accurate estimation of candidates. In our system, for each candidate, LTE calculates a cumulative reward after several future interactions when local variations are settled. Combining criteria from the two parts, our system determines the optimal choice of next move. For more comprehensive experiments, we introduce a new professional Go dataset (PGD), consisting of 253, 233 professional records. Experiments on GoGoD and PGD datasets show the DANN can substantially improve performance of move prediction over pure DCNN. When combining LTE, our system outperforms most relevant approaches and open engines based on",
"title": ""
},
{
"docid": "b3556499bf5d788de7c4d46100ac3a9f",
"text": "Reuse has been proposed as a microarchitecture-level mechanism to reduce the amount of executed instructions, collapsing dependencies and freeing resources for other instructions. Previous works have used reuse domains such as memory accesses, integer or not floating point, based on the reusability rate. However, these works have not studied the specific contribution of reusing different subsets of instructions for performance. In this work, we analysed the sensitivity of trace reuse to instruction subsets, comparing their efficiency to their complementary subsets. We also studied the amount of reuse that can be extracted from loops. Our experiments show that disabling trace reuse outside loops does not harm performance but reduces in 12% the number of accesses to the reuse table. Our experiments with reuse subsets show that most of the speedup can be retained even when not reusing all types of instructions previously found in the reuse domain. 1 ar X iv :1 71 1. 06 67 2v 1 [ cs .A R ] 1 7 N ov 2 01 7",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
}
] |
scidocsrr
|
4d97dc47536dbc6f296ac0e89fb309cf
|
An open-source navigation system for micro aerial vehicles
|
[
{
"docid": "c12d534d219e3d249ba3da1c0956c540",
"text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.",
"title": ""
},
{
"docid": "cff9a7f38ca6699b235c774232a56f54",
"text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.",
"title": ""
}
] |
[
{
"docid": "14bb62c02192f837303dcc2e327475a6",
"text": "In this paper, we have proposed three kinds of network security situation awareness (NSSA) models. In the era of big data, the traditional NSSA methods cannot analyze the problem effectively. Therefore, the three models are designed for big data. The structure of these models are very large, and they are integrated into the distributed platform. Each model includes three modules: network security situation detection (NSSD), network security situation understanding (NSSU), and network security situation projection (NSSP). Each module comprises different machine learning algorithms to realize different functions. We conducted a comprehensive study of the safety of these models. Three models compared with each other. The experimental results show that these models can improve the efficiency and accuracy of data processing when dealing with different problems. Each model has its own advantages and disadvantages.",
"title": ""
},
{
"docid": "b1ef897890df4c719d85dd339f8dee70",
"text": "Repositories of health records are collections of events with varying number and sparsity of occurrences within and among patients. Although a large number of predictive models have been proposed in the last decade, they are not yet able to simultaneously capture cross-attribute and temporal dependencies associated with these repositories. Two major streams of predictive models can be found. On one hand, deterministic models rely on compact subsets of discriminative events to anticipate medical conditions. On the other hand, generative models offer a more complete and noise-tolerant view based on the likelihood of the testing arrangements of events to discriminate a particular outcome. However, despite the relevance of generative predictive models, they are not easily extensible to deal with complex grids of events. In this work, we rely on the Markov assumption to propose new predictive models able to deal with cross-attribute and temporal dependencies. Experimental results hold evidence for the utility and superior accuracy of generative models to anticipate health conditions, such as the need for surgeries. Additionally, we show that the proposed generative models are able to decode temporal patterns of interest (from the learned lattices) with acceptable completeness and precision levels, and with superior efficiency for voluminous repositories.",
"title": ""
},
{
"docid": "9164bd704cdb8ca76d0b5f7acda9d4ef",
"text": "In this paper we present a deep neural network topology that incorporates a simple to implement transformationinvariant pooling operator (TI-POOLING). This operator is able to efficiently handle prior knowledge on nuisance variations in the data, such as rotation or scale changes. Most current methods usually make use of dataset augmentation to address this issue, but this requires larger number of model parameters and more training data, and results in significantly increased training time and larger chance of under-or overfitting. The main reason for these drawbacks is that that the learned model needs to capture adequate features for all the possible transformations of the input. On the other hand, we formulate features in convolutional neural networks to be transformation-invariant. We achieve that using parallel siamese architectures for the considered transformation set and applying the TI-POOLING operator on their outputs before the fully-connected layers. We show that this topology internally finds the most optimal \"canonical\" instance of the input image for training and therefore limits the redundancy in learned features. This more efficient use of training data results in better performance on popular benchmark datasets with smaller number of parameters when comparing to standard convolutional neural networks with dataset augmentation and to other baselines.",
"title": ""
},
{
"docid": "f5cf9268c2d3ddf04d840f5f1b68f238",
"text": "The ribosomal uL10 protein, formerly known as P0, is an essential element of the ribosomal GTPase-associated center responsible for the interplay with translational factors during various stages of protein synthesis. In eukaryotic cells, uL10 binds two P1/P2 protein heterodimers to form a pentameric P-stalk, described as uL10-(P1-P2)2, which represents the functional form of these proteins on translating ribosomes. Unlike most ribosomal proteins, which are incorporated into pre-ribosomal particles during early steps of ribosome biogenesis in the nucleus, P-stalk proteins are attached to the 60S subunit in the cytoplasm. Although the primary role of the P-stalk is related to the process of translation, other extraribosomal functions of its constituents have been proposed, especially for the uL10 protein; however, the list of its activities beyond the ribosome is still an open question. Here, by the combination of biochemical and advanced fluorescence microscopy techniques, we demonstrate that upon nucleolar stress induction the uL10 protein accumulates in the cytoplasm of mammalian cells as a free, ribosome-unbound protein. Importantly, using a novel approach, FRAP-AC (FRAP after photoConversion), we have shown that the ribosome-free pool of uL10 represents a population of proteins released from pre-existing ribosomes. Taken together, our data indicate that the presence of uL10 on the ribosomes is affected in stressed cells, thus it might be considered as a regulatory element responding to environmental fluctuations.",
"title": ""
},
{
"docid": "f68f259523b2ec08448de3c0f9d7d23a",
"text": "A comprehensive computational fluid-dynamics-based study of a pleated wing section based on the wing of Aeshna cyanea has been performed at ultra-low Reynolds numbers corresponding to the gliding flight of these dragonflies. In addition to the pleated wing, simulations have also been carried out for its smoothed counterpart (called the 'profiled' airfoil) and a flat plate in order to better understand the aerodynamic performance of the pleated wing. The simulations employ a sharp interface Cartesian-grid-based immersed boundary method, and a detailed critical assessment of the computed results was performed giving a high measure of confidence in the fidelity of the current simulations. The simulations demonstrate that the pleated airfoil produces comparable and at times higher lift than the profiled airfoil, with a drag comparable to that of its profiled counterpart. The higher lift and moderate drag associated with the pleated airfoil lead to an aerodynamic performance that is at least equivalent to and sometimes better than the profiled airfoil. The primary cause for the reduction in the overall drag of the pleated airfoil is the negative shear drag produced by the recirculation zones which form within the pleats. The current numerical simulations therefore clearly demonstrate that the pleated wing is an ingenious design of nature, which at times surpasses the aerodynamic performance of a more conventional smooth airfoil as well as that of a flat plate. For this reason, the pleated airfoil is an excellent candidate for a fixed wing micro-aerial vehicle design.",
"title": ""
},
{
"docid": "da61524899080951ea8453e7bb7c5ec6",
"text": "StressSense is smart clothing made of fabric sensors that monitor the stress level of the wearers. The fabric sensors are comfortable, allowing for long periods of monitoring and the electronic components are waterproof and detachable for ease of care. This design project is expected to be beneficial for people who have a lot of stress in their daily life and who care about their mental health. It can be also used for people who need to control their stress level critically, such as analysts, stock managers, athletes, and patients with chronic diseases and disorders.",
"title": ""
},
{
"docid": "31f1079ac79278eaf5fbcd5ef11482e7",
"text": "Data from two studies describe the development of an implicit measure of humility and support the idea that dispositional humility is a positive quality with possible benefits. In Study 1, 135 college students completed Humility and Self-Esteem Implicit Association Tests (IATs) and several self-report measures of personality self-concept. Fifty-four participants also completed the Humility IAT again approximately 2 weeks later and their humility was rated by close acquaintances. The Humility IAT was found to be internally and temporally consistent. Implicit humility correlated with self-reported humility relative to arrogance, implicit self-esteem, and narcissism (inversely). Humility was not associated with self-reported low selfesteem, pessimism, or depression. In fact, self-reported humility relative to arrogance correlated positively with self-reported self-esteem, gratitude, forgiveness, spirituality, and general health. In addition, self-reported humility and acquaintancerated humility correlated positively; however, implicit humility and acquaintance-rated humility were not strongly associated. In Study 2, to examine the idea that humility might be associated with increased academic performance, we examined actual course grades of 55 college students who completed Humility and Self-Esteem IATs. Implicit humility correlated positively with higher actual course grades when narcissism, conscientiousness, and implicit self-esteem were simultaneously controlled. Implications and future research directions are discussed.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "c182bef2a20bb9c13d0b2b89e7adf5ce",
"text": "Endocannabinoids are neuromodulators that act as retrograde synaptic messengers inhibiting the release of different neurotransmitters in cerebral areas such as hippocampus, cortex, and striatum. However, little is known about other roles of the endocannabinoid system in brain. In the present work we provide substantial evidence that the endocannabinoid anandamide (AEA) regulates neuronal differentiation both in culture and in vivo. Thus AEA, through the CB(1) receptor, inhibited cortical neuron progenitor differentiation to mature neuronal phenotype. In addition, human neural stem cell differentiation and nerve growth factor-induced PC12 cell differentiation were also inhibited by cannabinoid challenge. AEA decreased PC12 neuronal-like generation via CB(1)-mediated inhibition of sustained extracellular signal-regulated kinase (ERK) activation, which is responsible for nerve growth factor action. AEA thus inhibited TrkA-induced Rap1/B-Raf/ERK activation. Finally, immunohistochemical analyses by confocal microscopy revealed that adult neurogenesis in dentate gyrus was significantly decreased by the AEA analogue methanandamide and increased by the CB(1) antagonist SR141716. These data indicate that endocannabinoids inhibit neuronal progenitor cell differentiation through attenuation of the ERK pathway and suggest that they constitute a new physiological system involved in the regulation of neurogenesis.",
"title": ""
},
{
"docid": "115fab034391b2003dc0365460f5bbf1",
"text": "Polymyalgia rheumatica (PMR) is a chronic inflammatory disorder of unknown cause characterised by the subacute onset of shoulder and pelvic girdle pain, and early morning stiffness in men and women over the age of 50 years. Due to the lack of a gold standard investigation, diagnosis is based on a clinical construct and laboratory evidence of inflammation. Heterogeneity in the clinical presentation and disease course of PMR has long been recognised. Aside from the evolution of alternative diagnoses, such as late-onset rheumatoid arthritis, concomitant giant cell arteritis is also recognised in 16-21% of cases. In 2012, revised classification criteria were released by the European League Against Rheumatism and American College of Rheumatology in order to identify a more homogeneous population upon which future studies could be based. In this article, we aim to provide an updated perspective on the pathogenesis and diagnosis of PMR, with particular focus on imaging modalities, such as ultrasound and whole body positron emission tomography/computed tomography, which have advanced our current understanding of this disease. Future treatment directions, based on recognition of the key cytokines involved in PMR, will also be explored.",
"title": ""
},
{
"docid": "a239e75cb06355884f65f041e215b902",
"text": "BACKGROUND\nNecrotizing enterocolitis (NEC) and nosocomial sepsis are associated with increased morbidity and mortality in preterm infants. Through prevention of bacterial migration across the mucosa, competitive exclusion of pathogenic bacteria, and enhancing the immune responses of the host, prophylactic enteral probiotics (live microbial supplements) may play a role in reducing NEC and associated morbidity.\n\n\nOBJECTIVES\nTo compare the efficacy and safety of prophylactic enteral probiotics administration versus placebo or no treatment in the prevention of severe NEC and/or sepsis in preterm infants.\n\n\nSEARCH STRATEGY\nFor this update, searches were made of MEDLINE (1966 to October 2010), EMBASE (1980 to October 2010), the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2010), and abstracts of annual meetings of the Society for Pediatric Research (1995 to 2010).\n\n\nSELECTION CRITERIA\nOnly randomized or quasi-randomized controlled trials that enrolled preterm infants < 37 weeks gestational age and/or < 2500 g birth weight were considered. Trials were included if they involved enteral administration of any live microbial supplement (probiotics) and measured at least one prespecified clinical outcome.\n\n\nDATA COLLECTION AND ANALYSIS\nStandard methods of the Cochrane Collaboration and its Neonatal Group were used to assess the methodologic quality of the trials, data collection and analysis.\n\n\nMAIN RESULTS\nSixteen eligible trials randomizing 2842 infants were included. Included trials were highly variable with regard to enrollment criteria (i.e. birth weight and gestational age), baseline risk of NEC in the control groups, timing, dose, formulation of the probiotics, and feeding regimens. Data regarding extremely low birth weight infants (ELBW) could not be extrapolated. In a meta-analysis of trial data, enteral probiotics supplementation significantly reduced the incidence of severe NEC (stage II or more) (typical RR 0.35, 95% CI 0.24 to 0.52) and mortality (typical RR 0.40, 95% CI 0.27 to 0.60). There was no evidence of significant reduction of nosocomial sepsis (typical RR 0.90, 95% CI 0.76 to 1.07). The included trials reported no systemic infection with the probiotics supplemental organism. The statistical test of heterogeneity for NEC, mortality and sepsis was insignificant.\n\n\nAUTHORS' CONCLUSIONS\nEnteral supplementation of probiotics prevents severe NEC and all cause mortality in preterm infants. Our updated review of available evidence supports a change in practice. More studies are needed to assess efficacy in ELBW infants and assess the most effective formulation and dose to be utilized.",
"title": ""
},
{
"docid": "06a1d90991c5a9039c6758a66205e446",
"text": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.",
"title": ""
},
{
"docid": "a1915a869616b9c8c2547f66ec89de13",
"text": "The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.",
"title": ""
},
{
"docid": "72b080856124d39b62d531cb52337ce9",
"text": "Experimental and clinical studies have identified a crucial role of microcirculation impairment in severe infections. We hypothesized that mottling, a sign of microcirculation alterations, was correlated to survival during septic shock. We conducted a prospective observational study in a tertiary teaching hospital. All consecutive patients with septic shock were included during a 7-month period. After initial resuscitation, we recorded hemodynamic parameters and analyzed their predictive value on mortality. The mottling score (from 0 to 5), based on mottling area extension from the knees to the periphery, was very reproducible, with an excellent agreement between independent observers [kappa = 0.87, 95% CI (0.72–0.97)]. Sixty patients were included. The SOFA score was 11.5 (8.5–14.5), SAPS II was 59 (45–71) and the 14-day mortality rate 45% [95% CI (33–58)]. Six hours after inclusion, oliguria [OR 10.8 95% CI (2.9, 52.8), p = 0.001], arterial lactate level [<1.5 OR 1; between 1.5 and 3 OR 3.8 (0.7–29.5); >3 OR 9.6 (2.1–70.6), p = 0.01] and mottling score [score 0–1 OR 1; score 2–3 OR 16, 95% CI (4–81); score 4–5 OR 74, 95% CI (11–1,568), p < 0.0001] were strongly associated with 14-day mortality, whereas the mean arterial pressure, central venous pressure and cardiac index were not. The higher the mottling score was, the earlier death occurred (p < 0.0001). Patients whose mottling score decreased during the resuscitation period had a better prognosis (14-day mortality 77 vs. 12%, p = 0.0005). The mottling score is reproducible and easy to evaluate at the bedside. The mottling score as well as its variation during resuscitation is a strong predictor of 14-day survival in patients with septic shock.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "a423435c1dc21c33b93a262fa175f5c5",
"text": "The study investigated several teacher characteristics, with a focus on two measures of teaching experience, and their association with second grade student achievement gains in low performing, high poverty schools in a Mid-Atlantic state. Value-added models using three-level hierarchical linear modeling were used to analyze the data from 1,544 students, 154 teachers, and 53 schools. Results indicated that traditional teacher qualification characteristics such as licensing status and educational attainment were not statistically significant in producing student achievement gains. Total years of teaching experience was also not a significant predictor but a more specific measure, years of teaching experience at a particular grade level, was significantly associated with increased student reading achievement. We caution researchers and policymakers when interpreting results from studies that have used only a general measure of teacher experience as effects are possibly underestimated. Policy implications are discussed.",
"title": ""
},
{
"docid": "739aaf487d6c5a7b7fe9d0157d530382",
"text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.",
"title": ""
},
{
"docid": "e3acdb12bf902aeee1d6619fd1bd13cc",
"text": "The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.",
"title": ""
},
{
"docid": "833786dcf2288f21343d60108819fe49",
"text": "This paper describes an audio event detection system which automatically classifies an audio event as ambient noise, scream or gunshot. The classification system uses two parallel GMM classifiers for discriminating screams from noise and gunshots from noise. Each classifier is trained using different features, appropriately chosen from a set of 47 audio features, which are selected according to a 2-step process. First, feature subsets of increasing size are assembled using filter selection heuristics. Then, a classifier is trained and tested with each feature subset. The obtained classification performance is used to determine the optimal feature vector dimension. This allows a noticeable speed-up w.r.t. wrapper feature selection methods. In order to validate the proposed detection algorithm, we carried out extensive experiments on a rich set of gunshots and screams mixed with ambient noise at different SNRs. Our results demonstrate that the system is able to guarantee a precision of 90% at a false rejection rate of 8%.",
"title": ""
}
] |
scidocsrr
|
3be7c032174e7c0804da33209d27ac8d
|
No Dutch Book can be built against the TBM even though update is not obtained by Bayes rule of conditioning
|
[
{
"docid": "7cf625ce06d335d7758c868514b4c635",
"text": "Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. 1. Jeffrey's rule in probability theory. In probability theory conditioning on an event . is classically obtained by the application of Bayes' rule. Let (Q, � , P) be a probability space where P(A) is the probability of the event Ae � where� is a Boolean algebra defined on a finite2 set n. P(A) quantified the degree of belief or the objective probability, depending on the interpretation given to the probability measure, that a particular arbitrary element m of n which is not a priori located in any of the sets of� belongs to a particular set Ae�. Suppose it is known that m belongs to Be� and P(B)>O. The probability measure P must be updated into PB that quantifies the same event as previously but after taking in due consideration the know ledge that me B. PB is obtained by Bayes' rule of conditioning: This rule can be obtained by requiring that: 81: VBE�. PB(B) = 1 82: VBe�, VX,Ye� such that X.Y�B. and PJ3(X) _ P(X) PB(Y)P(Y) PB(Y) = 0 ifP(Y)>O",
"title": ""
}
] |
[
{
"docid": "ee6612fa13482f7e3bbc7241b9e22297",
"text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.",
"title": ""
},
{
"docid": "7974d0299ffcca73bb425fb72f463429",
"text": "The development of human gut microbiota begins as soon as the neonate leaves the protective environment of the uterus (or maybe in-utero) and is exposed to innumerable microorganisms from the mother as well as the surrounding environment. Concurrently, the host responses to these microbes during early life manifest during the development of an otherwise hitherto immature immune system. The human gut microbiome, which comprises an extremely diverse and complex community of microorganisms inhabiting the intestinal tract, keeps on fluctuating during different stages of life. While these deviations are largely natural, inevitable and benign, recent studies show that unsolicited perturbations in gut microbiota configuration could have strong impact on several features of host health and disease. Our microbiota undergoes the most prominent deviations during infancy and old age and, interestingly, our immune health is also in its weakest and most unstable state during these two critical stages of life, indicating that our microbiota and health develop and age hand-in-hand. However, the mechanisms underlying these interactions are only now beginning to be revealed. The present review summarizes the evidences related to the age-associated changes in intestinal microbiota and vice-versa, mechanisms involved in this bi-directional relationship, and the prospective for development of microbiota-based interventions such as probiotics for healthy aging.",
"title": ""
},
{
"docid": "8e8f3d504bdeb2b6c4b86999df3ece67",
"text": "Software released in binary form frequently uses third-party packages without respecting their licensing terms. For instance, many consumer devices have firmware containing the Linux kernel, without the suppliers following the requirements of the GNU General Public License. Such license violations are often accidental, e.g., when vendors receive binary code from their suppliers with no indication of its provenance. To help find such violations, we have developed the Binary Analysis Tool (BAT), a system for code clone detection in binaries. Given a binary, such as a firmware image, it attempts to detect cloning of code from repositories of packages in source and binary form. We evaluate and compare the effectiveness of three of BAT's clone detection techniques: scanning for string literals, detecting similarity through data compression, and detecting similarity by computing binary deltas.",
"title": ""
},
{
"docid": "767e133857d336e73d04e0ae5e924283",
"text": "OVERVIEW 376 THEORETICAL PERSPECTIVES ON MEDIA USE AND EFFECTS 377 Social Cognitive Theory 377 Parasocial Relationships and Parasocial Interactions 377 Cognitive Approaches 378 The Cultivation Hypothesis 378 Uses and Gratification Theory 379 Arousal Theory 379 Psychoanalytic Theory 379 Behaviorism and Classical Conditioning 379 Summary 380 THE HISTORY AND EVOLUTION OF MEDIA PLATFORMS 380 THE ECOLOGY OF THE DIGITALWORLD 381 Media Access 381 Defining Media Exposure 382 Measuring Media Use and Exposure 383 THE DISAPPEARANCE OF QUIET ENVIRONMENTS 386 Media, Imaginative Play, Creativity, and Daydreaming 386 Media and Sleep Patterns 388 Media and Concentration 389 THE SOCIAL NATURE OF MEDIA ENVIRONMENTS: ELECTRONIC FRIENDS AND COMMUNICATIONS 389 Prosocial Media: “It’s a Beautiful Day in the Neighborhood” 390 Parasocial Relationships With Media Characters 392 Social Media: Being and Staying Connected 393 THE MEAN AND SCARY WORLD: MEDIA VIOLENCE AND SCARY CONTENT 394 Media Violence 394 Children’s Fright Reactions to Scary Media Content 396 MEDIA, GENDER, AND SEXUALITY 397 Gender-Stereotyped Content 397 Influences of Media on Gender-Related Processing and Outcomes 398 Sexual Content 400 Influences of Sexual Content on Children 400 FROM OUTDOOR TO INDOOR ENVIRONMENTS: THE OBESITY EPIDEMIC 401 The Content of Food and Beverage Advertisements 402 Energy Intake: Media Influences on Children’s Diets and Health Outcomes 402 Media-Related Caloric Expenditure 403 Summary 403 RISKY MEDIA ENVIRONMENTS: ALCOHOL, TOBACCO, AND ILLEGAL DRUGS 404 The Content: Exposure to Risky Behaviors 404 Influences of Exposure to Alcohol, Tobacco, and Illegal Drugs on Children 404 MEDIA POLICY 404 Early Media Exposure 405 The V-Chip 405 Media Violence 405 Regulating Sexual Content 405 The Commercialization of Childhood 406 Driving Hazards 407 The Children’s Television Act 407 CONCLUSIONS 407 REFERENCES 408",
"title": ""
},
{
"docid": "bb95c0246cbd1238ad4759f488763c37",
"text": "The massive scale of future wireless networks will cause computational bottlenecks in performance optimization. In this paper, we study the problem of connecting mobile traffic to Cloud RAN (C-RAN) stations. To balance station load, we steer the traffic by designing device association rules. The baseline association rule connects each device to the station with the strongest signal, which does not account for interference or traffic hot spots, and leads to load imbalances and performance deterioration. Instead, we can formulate an optimization problem to decide centrally the best association rule at each time instance. However, in practice this optimization has such high dimensions, that even linear programming solvers fail to solve. To address the challenge of massive connectivity, we propose an approach based on the theory of optimal transport, which studies the economical transfer of probability between two distributions. Our proposed methodology can further inspire scalable algorithms for massive optimization problems in wireless networks.",
"title": ""
},
{
"docid": "f2274a04e0a54fb5a46e2be99863d9ac",
"text": "I find that dialysis providers in the United States exercise market power by reducing the clinical quality, or dose, of dialysis treatment. This market power stems from two sources. The first is a spatial dimension—patients face high travel costs and traveling farther for quality is undesirable. The second source is congestion—technological constraints may require dialysis capacity to be rationed among patients. Both of these sources of market power should be considered when developing policies aimed at improving quality or access in this industry. To this end, I develop and estimate an entry game with quality competition where providers choose both capacity and quality. Increasing the Medicare reimbursement rate for dialysis or subsidizing entry result in increased entry and improved quality for patients. However, these policies are extremely costly because providers are able to capture 84 to 97 percent of the additional surplus, leaving very little pass-through to consumers. Policies targeting the sources of market power provide a cost effective way of improving quality by enhancing competition and forcing providers to give up producer surplus. For example, I find that a program subsidizing patient travel costs $373 million, increases consumer surplus by $440 million, and reduces the mortality rate by 3 percent. ∗I thank my advisers Allan Collard-Wexler, Pat Bayer, Ryan McDevitt, James Roberts, and Chris Timmins for their extensive comments, guidance, and support. I am also grateful to Peter Arcidiacono, Federico Bugni, David Ridley, Adam Rosen, John Singleton, Frank Sloan, Daniel Xu and seminar participants at Duke, the International Industrial Organization Conference, and the Applied Micro Workshop at the Federal Reserve Board. †[email protected]",
"title": ""
},
{
"docid": "59bd3e5db7291e43a8439e63d957aa31",
"text": "Semi-supervised classifier design that simultaneously utilizes both labeled and unlabeled samples is a major research issue in machine learning. Existing semisupervised learning methods belong to either generative or discriminative approaches. This paper focuses on probabilistic semi-supervised classifier design and presents a hybrid approach to take advantage of the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model. Both models belong to the same model family. The proposed hybrid model is constructed by combining both generative and bias correction models based on the maximum entropy principle. The parameters of the bias correction model are estimated by using training data, and combination weights are estimated so that labeled samples are correctly classified. We use naive Bayes models as the generative models to apply the hybrid approach to text classification problems. In our experimental results on three text data sets, we confirmed that the proposed method significantly outperformed pure generative and discriminative methods when the classification performances of the both methods were comparable.",
"title": ""
},
{
"docid": "681360f20a662f439afaaa022079f7c0",
"text": "We present a multi-PC/camera system that can perform 3D reconstruction and ellipsoids fitting of moving humans in real time. The system consists of five cameras. Each camera is connected to a PC which locally extracts the silhouettes of the moving person in the image captured by the camera. The five silhouette images are then sent, via local network, to a host computer to perform 3D voxel-based reconstruction by an algorithm called SPOT. Ellipsoids are then used to fit the reconstructed data. By using a simple and user-friendly interface, the user can display and observe, in real time and from any view-point, the 3D models of the moving human body. With a rate of higher than 15 frames per second, the system is able to capture nonintrusively sequence of human motions.",
"title": ""
},
{
"docid": "d0da33c18339070575bf1244e93c81fe",
"text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments (single factor or factorial designs), A/B tests (and their generalizations), split tests, Control/Treatment tests, and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person's Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.",
"title": ""
},
{
"docid": "a0c42d2b0ffd4a784c016663dfb6bb4e",
"text": "College of Information and Electrical Engineering, China Agricultural University, Beijing, China Abstract. This paper presents a system framework taking the advantages of the WSN for the real-time monitoring on the water quality in aquaculture. We design the structure of the wireless sensor network to collect and continuously transmit data to the monitoring software. Then we accomplish the configuration model in the software that enhances the reuse and facility of the monitoring project. Moreover, the monitoring software developed to represent the monitoring hardware and data visualization, and analyze the data with expert knowledge to implement the auto control. The monitoring system has been realization of the digital, intelligent, and effectively ensures the quality of aquaculture water. Practical deployment results are to show the system reliability and real-time characteristics, and to display good effect on environmental monitoring of water quality.",
"title": ""
},
{
"docid": "423f246065662358b1590e8f59a2cc55",
"text": "Caused by the rising interest in traffic surveillance for simulations and decision management many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters a wide-area traffic monitoring on demand is needed which can only be provided by airborne systems. This means a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The found vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Concerning their distance and correlation the features are assigned pairwise with respect to their global positioning among each other. Choosing only the best correlating assignments it is possible to compute reliable values for the average velocities.",
"title": ""
},
{
"docid": "28a481f51a7d673d1acb396d8b9c25fb",
"text": "This study investigated the combination of mothers' and fathers' parenting styles (affection, behavioral control, and psychological control) that would be most influential in predicting their children's internal and external problem behaviors. A total of 196 children (aged 5-6 years) were followed up six times from kindergarten to the second grade to measure their problem behaviors. Mothers and fathers filled in a questionnaire measuring their parenting styles once every year. The results showed that a high level of psychological control exercised by mothers combined with high affection predicted increases in the levels of both internal and external problem behaviors among children. Behavioral control exercised by mothers decreased children's external problem behavior but only when combined with a low level of psychological control.",
"title": ""
},
{
"docid": "da63f023a1fd1f646deb5b2908e8634f",
"text": "This paper presents a new algorithm for smoothing 3D binary images in a topology preserving way. Our algorithm is a reduction operator: some border points that are considered as extremities are removed. The proposed method is composed of two parallel reduction operators. We are to apply our smoothing algorithm as an iterationby-iteration pruning for reducing the noise sensitivity of 3D parallel surface-thinning algorithms. An efficient implementation of our algorithm is sketched and its topological correctness for (26,6) pictures is proved.",
"title": ""
},
{
"docid": "53633432216e383297e401753332b00a",
"text": "Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream. Results suggest that the lPCS in engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help to explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.",
"title": ""
},
{
"docid": "c273620e05cc5131e8c6d58b700a0aab",
"text": "Differential evolution has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of covariance matrix of individual solutions, which makes the crossover rotationally invariant. More specifically, the donor vectors during crossover are modified, by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.",
"title": ""
},
{
"docid": "08b5bff9f96619083c16607090311345",
"text": "This demo presents a prototype mobile app that provides out-of-the-box personalised content recommendations to its users by leveraging and combining the user's location, their Facebook and/or Twitter feed and their in-app actions to automatically infer their interests. We build individual models for each user and each location. At retrieval time we construct the user's personalised feed by mixing different sources of content-based recommendations with content directly from their Facebook/Twitter feeds, locally trending articles and content propagated through their in-app social network. Both explicit and implicit feedback signals from the users' interactions with their recommendations are used to update their interests models and to learn their preferences over the different content sources.",
"title": ""
},
{
"docid": "e67f95384ce816124648cdc33cd7091c",
"text": "A high-efficiency push-pull power amplifier has been designed and measured across a bandwidth of 250MHz to 3.1GHz. The output power was 46dBm with a drain efficiency of above 45% between 700MHz and 2GHz, with a minimum output power of 43dBm across the entire band. In addition, a minimum of 60% drain efficiency and 11dB transducer gain was measured between 350MHz and 1GHz. The design was realized using a coaxial cable transmission line balun, which provides a broadband 2∶1 impedance transformation ratio and reduces the need for bandwidth-limiting conventional matching. The combination of output power, bandwidth and efficiency are believed to be the best reported to date at these frequencies.",
"title": ""
},
{
"docid": "142c5598f0a8b95b5d4f3e5656a857a9",
"text": "Flavanols from chocolate appear to increase nitric oxide bioavailability, protect vascular endothelium, and decrease cardiovascular disease (CVD) risk factors. We sought to test the effect of flavanol-rich dark chocolate (FRDC) on endothelial function, insulin sensitivity, beta-cell function, and blood pressure (BP) in hypertensive patients with impaired glucose tolerance (IGT). After a run-in phase, 19 hypertensives with IGT (11 males, 8 females; 44.8 +/- 8.0 y) were randomized to receive isocalorically either FRDC or flavanol-free white chocolate (FFWC) at 100 g/d for 15 d. After a wash-out period, patients were switched to the other treatment. Clinical and 24-h ambulatory BP was determined by sphygmometry and oscillometry, respectively, flow-mediated dilation (FMD), oral glucose tolerance test, serum cholesterol and C-reactive protein, and plasma homocysteine were evaluated after each treatment phase. FRDC but not FFWC ingestion decreased insulin resistance (homeostasis model assessment of insulin resistance; P < 0.0001) and increased insulin sensitivity (quantitative insulin sensitivity check index, insulin sensitivity index (ISI), ISI(0); P < 0.05) and beta-cell function (corrected insulin response CIR(120); P = 0.035). Systolic (S) and diastolic (D) BP decreased (P < 0.0001) after FRDC (SBP, -3.82 +/- 2.40 mm Hg; DBP, -3.92 +/- 1.98 mm Hg; 24-h SBP, -4.52 +/- 3.94 mm Hg; 24-h DBP, -4.17 +/- 3.29 mm Hg) but not after FFWC. Further, FRDC increased FMD (P < 0.0001) and decreased total cholesterol (-6.5%; P < 0.0001), and LDL cholesterol (-7.5%; P < 0.0001). Changes in insulin sensitivity (Delta ISI - Delta FMD: r = 0.510, P = 0.001; Delta QUICKI - Delta FMD: r = 0.502, P = 0.001) and beta-cell function (Delta CIR(120) - Delta FMD: r = 0.400, P = 0.012) were directly correlated with increases in FMD and inversely correlated with decreases in BP (Delta ISI - Delta 24-h SBP: r = -0.368, P = 0.022; Delta ISI - Delta 24-h DBP r = -0.384, P = 0.017). Thus, FRDC ameliorated insulin sensitivity and beta-cell function, decreased BP, and increased FMD in IGT hypertensive patients. These findings suggest flavanol-rich, low-energy cocoa food products may have a positive impact on CVD risk factors.",
"title": ""
},
{
"docid": "2364fc795ff8e449a557eda4b498b42d",
"text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4 and 20.4 percent, and reduces the access latency by 58.7 and 34.8 percent than the DuraCloud and RACS schemes, respectively.",
"title": ""
},
{
"docid": "09c5bfd9c7fcd78f15db76e8894751de",
"text": "Recently, active suspension is gaining popularity in commercial automobiles. To develop the control methodologies for active suspension control, a quarter-car test bed was built employing a direct-drive tubular linear brushless permanent-magnet motor (LBPMM) as a force-generating component. Two accelerometers and a linear variable differential transformer (LVDT) are used in this quarter-car test bed. Three pulse-width-modulation (PWM) amplifiers supply the currents in three phases. Simulated road disturbance is generated by a rotating cam. Modified lead-lag control, linear-quadratic (LQ) servo control with a Kalman filter, fuzzy control methodologies were implemented for active-suspension control. In the case of fuzzy control, an asymmetric membership function was introduced to eliminate the DC offset in sensor data and to reduce the discrepancy in the models. This controller could attenuate road disturbance by up to 77% in the sprung mass velocity and 69% in acceleration. The velocity and the acceleration data of the sprung mass are presented to compare the controllers' performance in the ride comfort of a vehicle. Both simulation and experimental results are presented to demonstrate the effectiveness of these control methodologies.",
"title": ""
}
] |
scidocsrr
|
60b484e7e8d0bec73da1f9de98c17a78
|
Pushing XPath Accelerator to its Limits
|
[
{
"docid": "020545bf4a1050c8c45d5df57df2fed5",
"text": "Relational XQuery systems try to re-use mature relational data management infrastructures to create fast and scalable XML database technology. This paper describes the main features, key contributions, and lessons learned while implementing such a system. Its architecture consists of (i) a range-based encoding of XML documents into relational tables, (ii) a compilation technique that translates XQuery into a basic relational algebra, (iii) a restricted (order) property-aware peephole relational query optimization strategy, and (iv) a mapping from XML update statements into relational updates. Thus, this system implements all essential XML database functionalities (rather than a single feature) such that we can learn from the full consequences of our architectural decisions. While implementing this system, we had to extend the state-of-the-art with a number of new technical contributions, such as loop-lifted staircase join and efficient relational query evaluation strategies for XQuery theta-joins with existential semantics. These contributions as well as the architectural lessons learned are also deemed valuable for other relational back-end engines. The performance and scalability of the resulting system is evaluated on the XMark benchmark up to data sizes of 11GB. The performance section also provides an extensive benchmark comparison of all major XMark results published previously, which confirm that the goal of purely relational XQuery processing, namely speed and scalability, was met.",
"title": ""
}
] |
[
{
"docid": "355b712e6e97cb44dd20d53b627615da",
"text": "BACKGROUND\nLumbar stabilization exercises have gained popularity and credibility in patients with non-acute low back pain. Previous research provides more support to strength/resistance and coordination/stabilisation programs. Some authors also suggest adding strength/resistance training following motor control exercises. However, the effect of such a lumbar stabilization program on lumbar proprioception has never been tested so far. The present study investigated the effects of an 8-week stabilization exercise program on lumbar proprioception in patients with low back pain (LBP) and assessed the 8-week test-retest reliability of lumbar proprioception in control subjects.\n\n\nMETHODS\nLumbar proprioception was measured before and after an 8-week lumbar stabilization exercise program for patients with LBP. Control subjects participated in the same protocol but received no treatment.\n\n\nRESULTS\nThe lumbar proprioception measure showed moderate reliability. Patients with LBP and control subjects demonstrated no differences in lumbar proprioception at baseline. Participants from both groups showed better proprioception following the 8-week interval, demonstrating the presence of learning between testing days.\n\n\nCONCLUSIONS\nThe improvement of lumbar proprioception seen in both groups was ascribed to motor learning of the test itself. The effect of lumbar stabilization exercises on lumbar proprioception remains unknown because the LBP group did not show lumbar proprioception impairments.",
"title": ""
},
{
"docid": "18ba6afa8aa1a1e603d87085f9de9332",
"text": "A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, (1) before it starts self-improvement, (2) during its takeoff, when it uses various instruments to escape its initial confinement, or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature-flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified around several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.",
"title": ""
},
{
"docid": "f53198f035068a25e51d8a04d0206545",
"text": "This communication proposes a compact multiband antenna fed by microstrip coupling for handsets with conducting edge applications. The antenna is composed of a U-shaped loop, an inserted open-end T-shaped slot, and a feed line with a compact antenna area. Three types of resonant modes are excited, namely loop mode, slot mode, and monopole mode. Parametric studies have been performed. After the related parameters are controlled, the bandwidth of this antenna has the potential to cover the mobile bands of GSM (824-960 MHz), DCS (1710-1880 MHz), PCS (1850-1990 MHz), UMTS (1920-2170 MHz), LTE bands (FDD-LTE bands 1-10, 15, 16, 18-20, 22, 23, 25-27, and 30 and TDD-LTE bands 33-43). Good radiation characteristics, such as gain and radiation efficiency, are obtained with these operating bands. In addition, the structure of the inserted slot antenna reduced the user's hand effects on the radiation efficiency.",
"title": ""
},
{
"docid": "f14272db4779239dc7d392ef7dfac52d",
"text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10",
"title": ""
},
{
"docid": "a92efa40799017f16c9ae624b97d02aa",
"text": "BLEU is the de facto standard automatic evaluation metric in machine translation. While BLEU is undeniably useful, it has a number of limitations. Although it works well for large documents and multiple references, it is unreliable at the sentence or sub-sentence levels, and with a single reference. In this paper, we propose new variants of BLEU which address these limitations, resulting in a more flexible metric which is not only more reliable, but also allows for more accurate discriminative training. Our best metric has better correlation with human judgements than standard BLEU, despite using a simpler formulation. Moreover, these improvements carry over to a system tuned for our new metric.",
"title": ""
},
{
"docid": "fb5c9e78960ab840e423741059cbf8b8",
"text": "Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values. Last count reveals that there are more than ten high-tech companies offering products for text mining. Has text mining evolved so rapidly to become a mature field? This article attempts to shed some lights to the question. We first present a text mining framework consisting of two components: Text refining that transforms unstructured text documents into an intermediate form; and knowledge distillation that deduces patterns or knowledge from the intermediate form. We then survey the state-of-the-art text mining products/applications and align them based on the text refining and knowledge distillation functions as well as the intermediate form that they adopt. In conclusion, we highlight the upcoming challenges of text mining and the opportunities it offers.",
"title": ""
},
{
"docid": "a50ea2739751249e2832cae2df466d0b",
"text": "The Arabic Online Commentary (AOC) (Zaidan and Callison-Burch, 2011) is a large-scale repository of Arabic dialects with manual labels for 4 varieties of the language. Existing dialect identification models exploiting the dataset pre-date the recent boost deep learning brought to NLP and hence the data are not benchmarked for use with deep learning, nor is it clear how much neural networks can help tease the categories in the data apart. We treat these two limitations: We (1) benchmark the data, and (2) empirically test 6 different deep learning methods on the task, comparing peformance to several classical machine learning models under different conditions (i.e., both binary and multi-way classification). Our experimental results show that variants of (attention-based) bidirectional recurrent neural networks achieve best accuracy (acc) on the task, significantly outperforming all competitive baselines. On blind test data, our models reach 87.65% acc on the binary task (MSA vs. dialects), 87.4% acc on the 3-way dialect task (Egyptian vs. Gulf vs. Levantine), and 82.45% acc on the 4-way variants task (MSA vs. Egyptian vs. Gulf vs. Levantine). We release our benchmark for future work on the dataset.",
"title": ""
},
{
"docid": "d50d07954360c23bcbe3802776562f34",
"text": "A stationary display of white discs positioned on intersecting gray bars on a dark background gives rise to a striking scintillating effectthe scintillating grid illusion. The spatial and temporal properties of the illusion are well known, but a neuronal-level explanation of the mechanism has not been fully investigated. Motivated by the neurophysiology of the Limulus retina, we propose disinhibition and self-inhibition as possible neural mechanisms that may give rise to the illusion. In this letter, a spatiotemporal model of the early visual pathway is derived that explicitly accounts for these two mechanisms. The model successfully predicted the change of strength in the illusion under various stimulus conditions, indicating that low-level mechanisms may well explain the scintillating effect in the illusion.",
"title": ""
},
{
"docid": "eae04aa2942bfd3752fb596f645e2c2e",
"text": "PURPOSE\nHigh fasting blood glucose (FBG) can lead to chronic diseases such as diabetes mellitus, cardiovascular and kidney diseases. Consuming probiotics or synbiotics may improve FBG. A systematic review and meta-analysis of controlled trials was conducted to clarify the effect of probiotic and synbiotic consumption on FBG levels.\n\n\nMETHODS\nPubMed, Scopus, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature databases were searched for relevant studies based on eligibility criteria. Randomized or non-randomized controlled trials which investigated the efficacy of probiotics or synbiotics on the FBG of adults were included. Studies were excluded if they were review articles and study protocols, or if the supplement dosage was not clearly mentioned.\n\n\nRESULTS\nA total of fourteen studies (eighteen trials) were included in the analysis. Random-effects meta-analyses were conducted for the mean difference in FBG. Overall reduction in FBG observed from consumption of probiotics and synbiotics was borderline statistically significant (-0.18 mmol/L 95 % CI -0.37, 0.00; p = 0.05). Neither probiotic nor synbiotic subgroup analysis revealed a significant reduction in FBG. The result of subgroup analysis for baseline FBG level ≥7 mmol/L showed a reduction in FBG of 0.68 mmol/L (-1.07, -0.29; ρ < 0.01), while trials with multiple species of probiotics showed a more pronounced reduction of 0.31 mmol/L (-0.58, -0.03; ρ = 0.03) compared to single species trials.\n\n\nCONCLUSION\nThis meta-analysis suggests that probiotic and synbiotic supplementation may be beneficial in lowering FBG in adults with high baseline FBG (≥7 mmol/L) and that multispecies probiotics may have more impact on FBG than single species.",
"title": ""
},
{
"docid": "eacc2609b013e58f1a293a5c5f7da792",
"text": "BACKGROUND\nCompetency-based education (CBE) has emerged in the health professions to address criticisms of contemporary approaches to training. However, the literature has no clear, widely accepted definition of CBE that furthers innovation, debate, and scholarship in this area.\n\n\nAIM\nTo systematically review CBE-related literature in order to identify key terms and constructs to inform the development of a useful working definition of CBE for medical education.\n\n\nMETHODS\nWe searched electronic databases and supplemented searches by using authors' files, checking reference lists, contacting relevant organizations and conducting Internet searches. Screening was carried out by duplicate assessment, and disagreements were resolved by consensus. We included any English- or French-language sources that defined competency-based education. Data were analyzed qualitatively and summarized descriptively.\n\n\nRESULTS\nWe identified 15,956 records for initial relevancy screening by title and abstract. The full text of 1,826 records was then retrieved and assessed further for relevance. A total of 173 records were analyzed. We identified 4 major themes (organizing framework, rationale, contrast with time, and implementing CBE) and 6 sub-themes (outcomes defined, curriculum of competencies, demonstrable, assessment, learner-centred and societal needs). From these themes, a new definition of CBE was synthesized.\n\n\nCONCLUSION\nThis is the first comprehensive systematic review of the medical education literature related to CBE definitions. The themes and definition identified should be considered by educators to advance the field.",
"title": ""
},
{
"docid": "24ade252fcc6bd5404484cb9ad5987a3",
"text": "The cornerstone of the IBM System/360 philosophy is that the architecture of a computer is basically independent of its physical implementation. Therefore, in System/360, different physical implementations have been made of the single architectural definition which is illustrated in Figure 1.",
"title": ""
},
{
"docid": "7cfeadc550f412bb92df4f265bf99de0",
"text": "AIM\nCorrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level became recently commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers using physical phantoms simulating realistic clinical settings.\n\n\nMETHODS\nA specially designed resolution phantom containing three (99m)Tc lines sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electrics Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a (99m)Tc-water solution. The projection data were reconstructed using the GE's Evolution for Bone(®), Philips Astonish(®) and Siemens Flash3D(®) software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed theses recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme.\n\n\nRESULTS\nThe best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5mm (GE), from 9.1 to 6.4mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves for GE from 147 to 189, from 179. to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but deteriorates significantly the spatial resolution.\n\n\nCONCLUSIONS\nUsing advanced reconstruction algorithms the largest improvement in image resolution and contrast is found for the scatter corrected slices without applying post-filtering. The user has to choose whether noise reduction by post-filtering or improved image resolution fits better a particular imaging procedure.",
"title": ""
},
{
"docid": "7a5167ffb79f35e75359c979295c22ee",
"text": "Precise forecast of the electrical load plays a highly significant role in the electricity industry and market. It provides economic operations and effective future plans for the utilities and power system operators. Due to the intermittent and uncertain characteristic of the electrical load, many research studies have been directed to nonlinear prediction methods. In this paper, a hybrid prediction algorithm comprised of Support Vector Regression (SVR) and Modified Firefly Algorithm (MFA) is proposed to provide the short term electrical load forecast. The SVR models utilize the nonlinear mapping feature to deal with nonlinear regressions. However, such models suffer from a methodical algorithm for obtaining the appropriate model parameters. Therefore, in the proposed method the MFA is employed to obtain the SVR parameters accurately and effectively. In order to evaluate the efficiency of the proposed methodology, it is applied to the electrical load demand in Fars, Iran. The obtained results are compared with those obtained from the ARMA model, ANN, SVR-GA, SVR-HBMO, SVR-PSO and SVR-FA. The experimental results affirm that the proposed algorithm outperforms other techniques. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e170be2a81d853ee3d81a9dd45528a20",
"text": "Hardware improvement of cybernetic human HRP-4C for entertainment is presented in this paper. We coined the word “Cybernetic Human” to explain a humanoid robot with a realistic head and a realistic figure of a human being. HRP-4C stands for Humanoid Robotics Platform-4 (Cybernetic human). Its joints and dimensions conform to average values of young Japanese females and HRP-4C looks very human-like. We have made HRP-4C present in several events to search for a possibility of use in the entertainment industry. Based on feedback from our experience, we improved its hardware. The new hand, the new foot with active toe joint, and the new eye with camera are introduced.",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
{
"docid": "f6df414f8f61dbdab32be2f05d921cb8",
"text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.",
"title": ""
},
{
"docid": "7a8babef15e8ed12d44dab68b5e17f6d",
"text": "Hands-free terminals for speech communication employ adaptive filters to reduce echoes resulting from the acoustic coupling between loudspeaker and microphone. When using a personal computer with commercial audio hardware for teleconferencing, a sampling frequency offset between the loudspeaker output D/A converter and the microphone input A/D converter often occurs. In this case, state-of-the-art echo cancellation algorithms fail to track the correct room impulse response. In this paper, we present a novel least mean square (LMS-type) adaptive algorithm to estimate the frequency offset and resynchronize the signals using arbitrary sampling rate conversion. In conjunction with a normalized LMS-type adaptive filter for room impulse response tracking, the proposed system widely removes the deteriorating effects of a frequency offset up to several Hz and restores the functionality of echo cancellation.",
"title": ""
},
{
"docid": "f6f014f88f0958db650c7d21f06813e1",
"text": "Nowadays, huge amount of data and information are available for everyone, Data can now be stored in many different kinds of databases and information repositories, besides being available on the Internet or in printed form. With such amount of data, there is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. In order to reveal the best tools for dealing with the classification task that helps in decision making, this paper has conducted a comparative study between a number of some of the free available data mining and knowledge discovery tools and software packages. Results have showed that the performance of the tools for the classification task is affected by the kind of dataset used and by the way the classification algorithms were implemented within the toolkits. For the applicability issue, the WEKA toolkit has achieved the highest applicability followed by Orange, Tanagra, and KNIME respectively. Finally; WEKA toolkit has achieved the highest improvement in classification performance; when moving from the percentage split test mode to the Cross Validation test mode, followed by Orange, KNIME and finally Tanagra respectively. Keywords-component; data mining tools; data classification; Wekak; Orange; Tanagra; KNIME.",
"title": ""
},
{
"docid": "b6a600ea1c277bc3bf8f2452b8aef3f1",
"text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.",
"title": ""
},
{
"docid": "e88f19cdd7f21c5aafedc13143bae00f",
"text": "For a long time, the term virtualization implied talking about hypervisor-based virtualization. However, in the past few years container-based virtualization got mature and especially Docker gained a lot of attention. Hypervisor-based virtualization provides strong isolation of a complete operating system whereas container-based virtualization strives to isolate processes from other processes at little resource costs. In this paper, hypervisor and container-based virtualization are differentiated and the mechanisms behind Docker and LXC are described. The way from a simple chroot over a container framework to a ready to use container management solution is shown and a look on the security of containers in general is taken. This paper gives an overview of the two different virtualization approaches and their advantages and disadvantages.",
"title": ""
}
] |
scidocsrr
|
c1ff619677937073bb73ea57e592d8e5
|
Policy-Based Adaptation of Byzantine Fault Tolerant Systems
|
[
{
"docid": "cbad7caa1cc1362e8cd26034617c39f4",
"text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.",
"title": ""
}
] |
[
{
"docid": "fac476744429cacfe1c07ec19ee295eb",
"text": "One effort to protect the network from the threats of hackers, crackers and security experts is to build the Intrusion Detection System (IDS) on the network. The problem arises when new attacks emerge in a relatively fast, so a network administrator must create their own signature and keep updated on new types of attacks that appear. In this paper, it will be made an Intelligence Intrusion Detection System (IIDS) where the Hierarchical Clustering algorithm as an artificial intelligence is used as pattern recognition and implemented on the Snort IDS. Hierarchical clustering applied to the training data to determine the number of desired clusters. Labeling cluster is then performed; there are three labels of cluster, namely Normal, High Risk and Critical. Centroid Linkage Method used for the test data of new attacks. Output system is used to update the Snort rule database. This research is expected to help the Network Administrator to monitor and learn some new types of attacks. From the result, this system is already quite good to recognize certain types of attacks like exploit, buffer overflow, DoS and IP Spoofing. Accuracy performance of this system for the mentioned above type of attacks above is 90%.",
"title": ""
},
{
"docid": "1a38f4218ab54ff22c776eb5572409bf",
"text": "Deep learning has achieved significant improvement in various machine learning tasks including image recognition, speech recognition, machine translation and etc. Inspired by the huge success of the paradigm, there have been lots of tries to apply deep learning algorithms to data analytics problems with big data including traffic flow prediction. However, there has been no attempt to apply the deep learning algorithms to the analysis of air traffic data. This paper investigates the effectiveness of the deep learning models in the air traffic delay prediction tasks. By combining multiple models based on the deep learning paradigm, an accurate and robust prediction model has been built which enables an elaborate analysis of the patterns in air traffic delays. In particular, Recurrent Neural Networks (RNN) has shown its great accuracy in modeling sequential data. Day-to-day sequences of the departure and arrival flight delays of an individual airport have been modeled by the Long Short-Term Memory RNN architecture. It has been shown that the accuracy of RNN improves with deeper architectures. In this study, four different ways of building deep RNN architecture are also discussed. Finally, the accuracy of the proposed prediction model was measured, analyzed and compared with previous prediction methods. It shows best accuracy compared with all other methods.",
"title": ""
},
{
"docid": "4c1060bf3e7d01f817e6ce84d1d6fac0",
"text": "1668 The smaller the volume (or share) of imports from the trading partner, the larger the impact of a preferential trade agreement on home country welfare—because the smaller the imports, the smaller the loss in tariff revenue. And the home country is better off as a small member of a large bloc than as a large member of a small bloc. Summary findings There has been a resurgence of preferential trade agreements (PTAs) partly because of the deeper European integration known as EC-92, which led to a fear of a Fortress Europe; and partly because of the U.S. decision to form a PTA with Canada. As a result, there has been a domino effect: a proliferation of PTAs, which has led to renewed debate about how PTAs affect both welfare and the multilateral system. Schiff examines two issues: the welfare impact of preferential trade agreements (PTAs) and the effect of structural and policy changes on PTAs. He asks how the PTA's effect on home-country welfare is affected by higher demand for imports; the efficiency of production of the partner or rest of the world (ROW); the share imported from the partner (ROW); and the initial protection on imports from the partner (ROW). Among his findings: • An individual country benefits more from a PTA if it imports less from its partner countries (with imports measured either in volume or as a share of total imports). This result has important implications for choice of partners. • A small home country loses from forming a free trade agreement (FTA) with a small partner country but gains from forming one with the rest of the world. In other words, the home country is better off as a small member of a large bloc than as a large member of a small bloc. This result need not hold if smuggling is a factor. • Home country welfare after formation of a FTA is higher when imports from the partner country are smaller, whether the partner country is large or small. Welfare worsens as imports from the partner country increase. • In general, a PTA is more beneficial (or less harmful) for a country with lower import demand. A PTA is also more beneficial for a country with a more efficient import-substituting sector, as this will result in a lower demand for imports. • A small country may gain from forming a PTA when smuggling …",
"title": ""
},
{
"docid": "209b304009db4a04400da178d19fe63e",
"text": "Mecanum wheels give vehicles and robots autonomous omni-directional capabilities, while regular wheels don’t. The omni-directionality that such wheels provide makes the vehicle extremely maneuverable, which could be very helpful in different indoor and outdoor applications. However, current Mecanum wheel designs can only operate on flat hard surfaces, and perform very poorly on rough terrains. This paper presents two modified Mecanum wheel designs targeted for complex rough terrains and discusses their advantages and disadvantages in comparison to regular Mecanum wheels. The wheels proposed here are particularly advantageous for overcoming obstacles up to 75% of the overall wheel diameter in lateral motion which significantly facilitates the lateral motion of vehicles on hard rough surfaces and soft soils such as sand which cannot be achieved using other types of wheels. The paper also presents control aspects that need to be considered when controlling autonomous vehicles/robots using the proposed wheels.",
"title": ""
},
{
"docid": "148d0709c58111c2f703f68d348c09af",
"text": "There has been tremendous growth in the use of mobile devices over the last few years. This growth has fueled the development of millions of software applications for these mobile devices often called as 'apps'. Current estimates indicate that there are hundreds of thousands of mobile app developers. As a result, in recent years, there has been an increasing amount of software engineering research conducted on mobile apps to help such mobile app developers. In this paper, we discuss current and future research trends within the framework of the various stages in the software development life-cycle: requirements (including non-functional), design and development, testing, and maintenance. While there are several non-functional requirements, we focus on the topics of energy and security in our paper, since mobile apps are not necessarily built by large companies that can afford to get experts for solving these two topics. For the same reason we also discuss the monetizing aspects of a mobile app at the end of the paper. For each topic of interest, we first present the recent advances done in these stages and then we present the challenges present in current work, followed by the future opportunities and the risks present in pursuing such research.",
"title": ""
},
{
"docid": "44bffd6caa0d90798f8ebc21a10fd248",
"text": "INTRODUCTION\nThis study describes quality indicators for the pre-analytical process, grouping errors according to patient risk as critical or major, and assesses their evaluation over a five-year period.\n\n\nMATERIALS AND METHODS\nA descriptive study was made of the temporal evolution of quality indicators, with a study population of 751,441 analytical requests made during the period 2007-2011. The Runs Test for randomness was calculated to assess changes in the trend of the series, and the degree of control over the process was estimated by the Six Sigma scale.\n\n\nRESULTS\nThe overall rate of critical pre-analytical errors was 0.047%, with a Six Sigma value of 4.9. The total rate of sampling errors in the study period was 13.54% (P = 0.003). The highest rates were found for the indicators \"haemolysed sample\" (8.76%), \"urine sample not submitted\" (1.66%) and \"clotted sample\" (1.41%), with Six Sigma values of 3.7, 3.7 and 2.9, respectively.\n\n\nCONCLUSION\nThe magnitude of pre-analytical errors was accurately valued. While processes that triggered critical errors are well controlled, the results obtained for those regarding specimen collection are borderline unacceptable; this is particularly so for the indicator \"haemolysed sample\".",
"title": ""
},
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
{
"docid": "82159d19fc5a5ac7242c7e60d75e1f09",
"text": "in the domain of computer technologies, our understanding of knowledge is somewhat more elusive. Data and information can be exactly the same thing, since their distinction lies in the context to which they are applied. One person's data could be another person's information. Although data often refers to the raw codification of facts, usually useful to few, it could be information for someone who could apply it to a decision or problem context. Typically, data is classified, summarized, transferred, or corrected to add value, and becomes information within a certain context. This conversion is relatively mechanical, and it has long been facilitated by storage, processing, and communication technologies. These technologies add place, time, and form utility to the data. In doing so, the information serves to \"inform\" or reduce uncertainty within the problem domain. Therefore, information is dyadic within the attendant condition, i.e., it has only utility within the context. Data, on the other hand, is not dyadic within the context. Independent of context, data and information could be identical.",
"title": ""
},
{
"docid": "dd51e9bed7bbd681657e8742bb5bf280",
"text": "Automated negotiation systems with self interested agents are becoming increas ingly important One reason for this is the technology push of a growing standardized communication infrastructure Internet WWW NII EDI KQML FIPA Concor dia Voyager Odyssey Telescript Java etc over which separately designed agents belonging to di erent organizations can interact in an open environment in real time and safely carry out transactions The second reason is strong application pull for computer support for negotiation at the operative decision making level For example we are witnessing the advent of small transaction electronic commerce on the Internet for purchasing goods information and communication bandwidth There is also an industrial trend toward virtual enterprises dynamic alliances of small agile enterprises which together can take advantage of economies of scale when available e g respond to more diverse orders than individual agents can but do not su er from diseconomies of scale Multiagent technology facilitates such negotiation at the operative decision mak ing level This automation can save labor time of human negotiators but in addi tion other savings are possible because computational agents can be more e ective at nding bene cial short term contracts than humans are in strategically and com binatorially complex settings This chapter discusses multiagent negotiation in situations where agents may have di erent goals and each agent is trying to maximize its own good without concern for the global good Such self interest naturally prevails in negotiations among independent businesses or individuals In building computer support for negotiation in such settings the issue of self interest has to be dealt with In cooperative distributed problem solving the system designer imposes an interaction protocol and a strategy a mapping from state history to action a",
"title": ""
},
{
"docid": "e294a94b03a2bd958def360a7bce2a46",
"text": "The seismic loss estimation is greatly influenced by the identification of the failure mechanism and distribution of the structures. In case of infilled structures, the final failure mechanism greatly differs to that expected during the design and the analysis stages. This is mainly due to the resultant composite behaviour of the frame and the infill panel, which makes the failure assessment and consequently the loss estimation a challenge. In this study, a numerical investigation has been conducted on the influence of masonry infilled panels on physical structural damages and the associated economic losses, under seismic excitation. The selected index buildings have been simulated following real case typical mid-rise masonry infilled steel frame structures. A realistic simulation of construction details, such as variation of infill material properties, type of connections and built quality have been implemented in the models. The fragility functions have been derived for each model using the outcomes obtained from incremental dynamic analysis (IDA). Moreover, by considering different cases of building distribution, the losses have been estimated following an intensity-based assessment approach. The results indicate that the presence of infill panel have a noticeable influence on the vulnerability of the structure and should not be ignored in loss estimations.",
"title": ""
},
{
"docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d",
"text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.",
"title": ""
},
{
"docid": "509f840b001b01825425db6209cb7c82",
"text": "A system of rigid bodies with multiple simultaneous unilateral contacts is considered in this paper. The problem is to predict the velocities of the bodies and the frictional forces acting on the simultaneous multicontacts. This paper presents a numerical method based on an extension of an explicit time-stepping scheme and an application of the differential inclusion process introduced by J. J. Moreau. From a differential kinematic analysis of contacts, we derive a set of transfer equations in the velocity-based time-stepping formulation. In applying the Gauss-Seidel iterative scheme, the transfer equations are combined with the Signorini conditions and Coulomb's friction law. The contact forces are properly resolved in each iteration, without resorting to any linearization of the friction cone. The proposed numerical method is illustrated with examples, and its performance is compared with an acceleration-based scheme using linear complementary techniques. Multibody contact systems are broadly involved in many engineering applications. The motivation of this is to solve for the contact forces and body motion for planning the fixture-inserting operation. However, the results of the paper can be generally used in problems involving multibody contacts, such as robotic manipulation, mobile robots, computer graphics and simulation, etc. The paper presents a numerical method based on an extension of an explicit time-stepping scheme, and an application of the differential inclusion process introduced by J. J. Moreau, and compares the numerical results with an acceleration-based scheme with linear complementary techniques. We first describe the mathematical model of contact kinematics of smooth rigid bodies. Then, we present the Gauss-Seidel iterative method for resolving the multiple simultaneous contacts within the time-stepping framework. Finally, numerical examples are given and compared with the previous results of a different approach, which shows that the simulation results of these two methods agree well, and it is also generally more efficient, as it is an explicit method. This paper focuses on the description of the proposed time-stepping and Gauss-Seidel iterations and their numerical implementation, and several theoretical issues are yet to be resolved, like the convergence and uniqueness of the Gauss-Seidel iteration, and the existence and uniqueness of a positive k in solving frictional forces. However, our limited numerical experience has indicated positive answers to these questions. We have always found a single positive root of k and a convergent solution in the Gauss-Seidel iteration for all of our examples.",
"title": ""
},
{
"docid": "f784ffcdb63558f5f22fe90058853904",
"text": "Stylometric analysis of prose is typically limited to classification tasks such as authorship attribution. Since the models used are typically black boxes, they give little insight into the stylistic differences they detect. In this paper, we characterize two prose genres syntactically: chick lit (humorous novels on the challenges of being a modern-day urban female) and high literature. First, we develop a top-down computational method based on existing literary-linguistic theory. Using an off-the-shelf parser we obtain syntactic structures for a Dutch corpus of novels and measure the distribution of sentence types in chick-lit and literary novels. The results show that literature contains more complex (subordinating) sentences than chick lit. Secondly, a bottom-up analysis is made of specific morphological and syntactic features in both genres, based on the parser’s output. This shows that the two genres can be distinguished along certain features. Our results indicate that detailed insight into stylistic differences can be obtained by combining computational linguistic analysis with literary theory.",
"title": ""
},
{
"docid": "4a4a11d2779eab866ff32c564e54b69d",
"text": "Although backpropagation neural networks generally predict better than decision trees do for pattern classiication problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees. In many applications, more often than not, explicit knowledge is needed by human experts. This work drives a symbolic representation for neural networks to make explicit each prediction of a neural network. An algorithm is proposed and implemented to extract symbolic rules from neural networks. Explicitness of the extracted rules is supported by comparing the symbolic rules generated by decision trees methods. Empirical study demonstrates that the proposed algorithm generates high quality rules from neural networks comparable with those of decision trees in terms of predictive accuracy, number of rules and average number of conditions for a rule. The symbolic rules from nerual networks preserve high predictive accuracy of original networks. An early and shorter version of this paper has been accepted for presentation at IJCAI'95.",
"title": ""
},
{
"docid": "bfc12c790b5195861ba74f024d7cc9b5",
"text": "Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems.",
"title": ""
},
{
"docid": "95d767d1b9a2ba2aecdf26443b3dd4af",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "971147ec0ca3210b834da65e563120d0",
"text": "The production of adenosine represents a critical endogenous mechanism for regulating immune and inflammatory responses during conditions of stress, injury, or infection. Adenosine exerts predominantly protective effects through activation of four 7-transmembrane receptor subtypes termed A1, A2A, A2B, and A3, of which the A2A adenosine receptor (A2AAR) is recognised as a major mediator of anti-inflammatory responses. The A2AAR is widely expressed on cells of the immune system and numerous in vitro studies have identified its role in suppressing key stages of the inflammatory process, including leukocyte recruitment, phagocytosis, cytokine production, and immune cell proliferation. The majority of actions produced by A2AAR activation appear to be mediated by cAMP, but downstream events have not yet been well characterised. In this article, we review the current evidence for the anti-inflammatory effects of the A2AAR in different cell types and discuss possible molecular mechanisms mediating these effects, including the potential for generalised suppression of inflammatory gene expression through inhibition of the NF-kB and JAK/STAT proinflammatory signalling pathways. We also evaluate findings from in vivo studies investigating the role of the A2AAR in different tissues in animal models of inflammatory disease and briefly discuss the potential for development of selective A2AAR agonists for use in the clinic to treat specific inflammatory conditions.",
"title": ""
},
{
"docid": "bdc82fead985055041171d63415f9dde",
"text": "We introduce a new corpus of sentence-level agreement and disagreement annotations over LiveJournal and Wikipedia threads. This is the first agreement corpus to offer full-document annotations for threaded discussions. We provide a methodology for coding responses as well as an implemented tool with an interface that facilitates annotation of a specific response while viewing the full context of the thread. Both the results of an annotator questionnaire and high inter-annotator agreement statistics indicate that the annotations collected are of high quality.",
"title": ""
},
{
"docid": "0b9b85dc4f80e087f591f89b12bb6146",
"text": "Entity profiling (EP) as an important task of Web mining and information extraction (IE) is the process of extracting entities in question and their related information from given text resources. From computational viewpoint, the Farsi language is one of the less-studied and less-resourced languages, and suffers from the lack of high quality language processing tools. This problem emphasizes the necessity of developing Farsi text processing systems. As an element of EP research, we present a semantic approach to extract profile of person entities from Farsi Web documents. Our approach includes three major components: (i) pre-processing, (ii) semantic analysis and (iii) attribute extraction. First, our system takes as input the raw text, and annotates the text using existing pre-processing tools. In semantic analysis stage, we analyze the pre-processed text syntactically and semantically and enrich the local processed information with semantic information obtained from a distant knowledge base. We then use a semantic rule-based approach to extract the related information of the persons in question. We show the effectiveness of our approach by testing it on a small Farsi corpus. The experimental results are encouraging and show that the proposed method outperforms baseline methods.",
"title": ""
}
] |
scidocsrr
|
14d11227c990c49308552e01212dc9c3
|
Humans prefer curved visual objects.
|
[
{
"docid": "5afe5504566e60cbbb50f83501eee06c",
"text": "This paper explores theoretical issues in ergonomics related to semantics and the emotional content of design. The aim is to find answers to the following questions: how to design products triggering \"happiness\" in one's mind; which product attributes help in the communication of positive emotions; and finally, how to evoke such emotions through a product. In other words, this is an investigation of the \"meaning\" that could be designed into a product in order to \"communicate\" with the user at an emotional level. A literature survey of recent design trends, based on selected examples of product designs and semantic applications to design, including the results of recent design awards, was carried out in order to determine the common attributes of their design language. A review of Good Design Award winning products that are said to convey and/or evoke emotions in the users has been done in order to define good design criteria. These criteria have been discussed in relation to user emotional responses and a selection of these has been given as examples.",
"title": ""
}
] |
[
{
"docid": "64e0a1345e5a181191c54f6f9524c96d",
"text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.",
"title": ""
},
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "2a78461c1949b0cf6b119ae99c08847f",
"text": "Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the handdesigned extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github. io/large-scale-curiosity/.",
"title": ""
},
{
"docid": "bf14fb39f07e01bd6dc01b3583a726b6",
"text": "To provide a general context for library implementations of open source software (OSS), the purpose of this paper is to assess and evaluate the awareness and adoption of OSS by the LIS professionals working in various engineering colleges of Odisha. The study is based on survey method and questionnaire technique was used for collection data from the respondents. The study finds that although the LIS professionals of engineering colleges of Odisha have knowledge on OSS, their uses in libraries are in budding stage. Suggests that for the widespread use of OSS in engineering college libraries of Odisha, a cooperative and participatory organisational system, positive attitude of authorities and LIS professionals, proper training provision for LIS professionals need to be developed.",
"title": ""
},
{
"docid": "14838947ee3b95c24daba5a293067730",
"text": "In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.",
"title": ""
},
{
"docid": "f08c6829b353c45b6a9a6473b4f9a201",
"text": "In this paper, we study the Symmetric Regularized Long Wave (SRLW) equations by finite difference method. We design some numerical schemes which preserve the original conservative properties for the equations. The first scheme is two-level and nonlinear-implicit. Existence of its difference solutions are proved by Brouwer fixed point theorem. It is proved by the discrete energy method that the scheme is uniquely solvable, unconditionally stable and second-order convergent for U in L1 norm, and for N in L2 norm on the basis of the priori estimates. The second scheme is three-level and linear-implicit. Its stability and second-order convergence are proved. Both of the two schemes are conservative so can be used for long time computation. However, they are coupled in computing so need more CPU time. Thus we propose another three-level linear scheme which is not only conservative but also uncoupled in computation, and give the numerical analysis on it. Numerical experiments demonstrate that the schemes are accurate and efficient. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "af07a7f4ffe29dda52bca62a803272fe",
"text": "OBJECTIVE\nTo evaluate the effectiveness and tolerance of intraarticular injection (IAI) of triamcinolone hexacetonide (TH) for the treatment of osteoarthritis (OA) of hand interphalangeal (IP) joints.\n\n\nMETHODS\nSixty patients who underwent IAI at the most symptomatic IP joint were randomly assigned to receive TH/lidocaine (LD; n = 30) with TH 20 mg/ml and LD 2%, or just LD (n = 30). The injected joint was immobilized with a splint for 48 h in both groups. Patients were assessed at baseline and at 1, 4, 8, and 12 weeks by a blinded observer. The following variables were assessed: pain at rest [visual analog scale (VAS)r], pain at movement (VASm), swelling (physician VASs), goniometry, grip and pinch strength, hand function, treatment improvement, daily requirement of paracetamol, and local adverse effects. The proposed treatment (IAI with TH/LD) was successful if statistical improvement (p < 0.05) was achieved in at least 2 of 3 VAS. Repeated-measures ANOVA test was used to analyze intervention response.\n\n\nRESULTS\nFifty-eight patients (96.67%) were women, and the mean age was 60.7 years (± 8.2). The TH/LD group showed greater improvement than the LD group for VASm (p = 0.014) and physician VASs (p = 0.022) from the first week until the end of the study. In other variables, there was no statistical difference between groups. No significant adverse effects were observed.\n\n\nCONCLUSION\nThe IAI with TH/LD has been shown to be more effective than the IAI with LD for pain on movement and joint swelling in patients with OA of the IP joints. Regarding pain at rest, there was no difference between groups.\n\n\nTRIAL REGISTRATION NUMBER\nClinicalTrials.gov (NCT02102620).",
"title": ""
},
{
"docid": "e583cf382c9a58a6f09acfcb345a381f",
"text": "DXC Technology were asked to participate in a Cyber Vulnerability Investigation into organizations in the Defense sector in the UK. Part of this work was to examine the influence of socio-technical and/or human factors on cyber security – where possible linking factors to specific technical risks. Initial research into the area showed that (commercially, at least) most approaches to developing security culture in organisations focus on end users and deal solely with training and awareness regarding identifying and avoiding social engineering attacks and following security procedures. The only question asked and answered is how to ensure individuals conform to security policy and avoid such attacks. But experience of recent attacks (e.g., Wannacry, Sony hacks) show that responses to cyber security requirements are not just determined by the end users’ level of training and awareness, but grow out of the wider organizational culture – with failures at different levels of the organization. This is a known feature of socio-technical research. As a result, we have sought to develop and apply a different approach to measuring security culture, based on discovering the distribution of beliefs and values (and resulting patterns of behavior) throughout the organization. Based on our experience, we show a way we can investigate these patterns of behavior and use them to identify socio-technical vulnerabilities by comparing current and ‘ideal’ behaviors. In doing so, we also discuss how this approach can be further developed and successfully incorporated into commercial practice, while retaining scientific validity.",
"title": ""
},
{
"docid": "b50b43bcc69f840e4ba4e26529788cab",
"text": "Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-ofthe-art detectors and observe that most hard false positives result from classification instead of localization. We conjecture that: (1) Shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal for individual tasks; (3) large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power by a simple, effective, and widely-applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains a RCNN-styled strong classifier. Experiments show new stateof-the-art results on PASCAL VOC and COCO without any bells and whistles.",
"title": ""
},
{
"docid": "fb1f3f300bcd48d99f0a553a709fdc89",
"text": "This work includes a high step up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, Electronic lighting systems and Electrical vehicles. The whole system consists of a Photovoltaic panel (PV), High step up DC-DC converter with Maximum Power Point Tracking (MPPT) and DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). D-sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step up DC-DC converter which comprises of both coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increases overall system efficiency. MATLAB/simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.",
"title": ""
},
{
"docid": "c043e7a5d5120f5a06ef6decc06c184a",
"text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures",
"title": ""
},
{
"docid": "5a573ae9fad163c6dfe225f59b246b7f",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "c7862136579a8340f22db5d6f3ee5f12",
"text": "A novel lighting system was devised for 3D defect inspection in the wire bonding process. Gold wires of 20 microm in diameter were bonded to connect the integrated circuit (IC) chip with the substrate. Bonding wire defects can be classified as 2D type and 3D type. The 2D-type defects include missed, shifted, or shorted wires. These defects can be inspected from a 2D top-view image of the wire. The 3D-type bonding wire defects are sagging wires, and are difficult to inspect from a 2D top-view image. A structured lighting system was designed and developed to facilitate all 2D-type and 3D-type defect inspection. The devised lighting system can be programmed to turn the structured LEDs on or off independently. Experiments show that the devised illumination system is effective for wire bonding inspection and will be valuable for further applications.",
"title": ""
},
{
"docid": "1ef2e54d021f9d149600f0bc7bebb0cd",
"text": "The field of open-domain conversation generation using deep neural networks has attracted increasing attention from researchers for several years. However, traditional neural language models tend to generate safe, generic reply with poor logic and no emotion. In this paper, an emotional conversation generation orientated syntactically constrained bidirectional-asynchronous framework called E-SCBA is proposed to generate meaningful (logical and emotional) reply. In E-SCBA, pre-generated emotion keyword and topic keyword are asynchronously introduced into the reply during the generation, and the process of decoding is much different from the most existing methods that generates reply from the first word to the end. A newly designed bidirectional-asynchronous decoder with the multi-stage strategy is proposed to support this idea, which ensures the fluency and grammaticality of reply by making full use of syntactic constraint. Through the experiments, the results show that our framework not only improves the diversity of replies, but gains a boost on both logic and emotion compared with baselines as well.",
"title": ""
},
{
"docid": "64ddf475e5fcf7407e4dfd65f95a68a8",
"text": "Fuzzy PID controllers have been developed and applied to many fields for over a period of 30 years. However, there is no systematic method to design membership functions (MFs) for inputs and outputs of a fuzzy system. Then optimizing the MFs is considered as a system identification problem for a nonlinear dynamic system which makes control challenges. This paper presents a novel online method using a robust extended Kalman filter to optimize a Mamdani fuzzy PID controller. The robust extended Kalman filter (REKF) is used to adjust the controller parameters automatically during the operation process of any system applying the controller to minimize the control error. The fuzzy PID controller is tuned about the shape of MFs and rules to adapt with the working conditions and the control performance is improved significantly. The proposed method in this research is verified by its application to the force control problem of an electro-hydraulic actuator. Simulations and experimental results show that proposed method is effective for the online optimization of the fuzzy PID controller. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "74aaf19d143d86b52c09e726a70a2ac0",
"text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.",
"title": ""
},
{
"docid": "e9358f48172423a421ef5edf6fe909f9",
"text": "PURPOSE\nTo describe a modification of the computer self efficacy scale for use in clinical settings and to report on the modified scale's reliability and construct validity.\n\n\nMETHODS\nThe computer self efficacy scale was modified to make it applicable for clinical settings (for use with older people or people with disabilities using everyday technologies). The modified scale was piloted, then tested with patients in an Australian inpatient rehabilitation setting (n = 88) to determine the internal consistency using Cronbach's alpha coefficient. Construct validity was assessed by correlation of the scale with age and technology use. Factor analysis using principal components analysis was undertaken to identify important constructs within the scale.\n\n\nRESULTS\nThe modified computer self efficacy scale demonstrated high internal consistency with a standardised alpha coefficient of 0.94. Two constructs within the scale were apparent; using the technology alone, and using the technology with the support of others. Scores on the scale were correlated with age and frequency of use of some technologies thereby supporting construct validity.\n\n\nCONCLUSIONS\nThe modified computer self efficacy scale has demonstrated reliability and construct validity for measuring the self efficacy of older people or people with disabilities when using everyday technologies. This tool has the potential to assist clinicians in identifying older patients who may be more open to using new technologies to maintain independence.",
"title": ""
},
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
{
"docid": "2959b7da07ce8b0e6825819566bce9ab",
"text": "Social isolation among the elderly is a concern in developed countries. Using a randomized trial, this study examined the effect of a social isolation prevention program on loneliness, depression, and subjective well-being of the elderly in Japan. Among the elderly people who relocated to suburban Tokyo, 63 who responded to a pre-test were randomized and assessed 1 and 6 months after the program. Four sessions of a group-based program were designed to prevent social isolation by improving community knowledge and networking with other participants and community \"gatekeepers.\" The Life Satisfaction Index A (LSI-A), Geriatric Depression Scale (GDS), Ando-Osada-Kodama (AOK) loneliness scale, social support, and other variables were used as outcomes of this study. A linear mixed model was used to compare 20 of the 21 people in the intervention group to 40 of the 42 in the control group, and showed that the intervention program had a significant positive effect on LSI-A, social support, and familiarity with services scores and a significant negative effect on AOK over the study period. The program had no significant effect on depression. The findings of this study suggest that programs aimed at preventing social isolation are effective when they utilize existing community resources, are tailor-made based on the specific needs of the individual, and target people who can share similar experiences.",
"title": ""
},
{
"docid": "42979dd6ad989896111ef4de8d26b2fb",
"text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.",
"title": ""
}
] |
scidocsrr
|
27f745614b07e8da1f32beb1a38fc404
|
A Novel Efficient Pairing-Free CP-ABE Based on Elliptic Curve Cryptography for IoT
|
[
{
"docid": "73063e5355c196921ee15bdc28b1aaf0",
"text": "Elliptic curve cryptosystems are more and more widespread in everyday-life applications. This trend should still gain momentum in coming years thanks to the exponential security enjoyed by these systems compared to the subexponential security of other systems such as RSA. For this reason, efficient elliptic curve arithmetic is still a hot topic for cryptographers. The core operation of elliptic curve cryptosystems is the scalar multiplication which multiplies some point on an elliptic curve by some (usually secret) scalar. When such an operation is implemented on an embedded system such as a smart card, it is subject to side channel attacks. To withstand such attacks, one must constrain the scalar multiplication algorithm to be regular, namely to have an operation flow independent of the input scalar. A large amount of work has been published that focus on efficient and regular scalar multiplication and the choice leading to the best performances in practice is not clear. In this paper, we look into this question for general-form elliptic curves over large prime fields and we complete the current state-of-the-art. One of the fastest low-memory algorithms in the current literature is the Montgomery ladder using co-Z Jacobian arithmetic with X and Y coordinates only. We detail the regular implementation of this algorithm with various trade-offs and we introduce a new binary algorithm achieving comparable performances. For implementations that are less constrained in memory, windowing techniques and signed exponent recoding enable reaching better timings. We survey regular algorithms based on such techniques and we discuss their security with respect to side-channel attacks. On the whole, our work give a clear view of the currently best time-memory trade-offs for regular implementation of scalar multiplication over prime-field elliptic curves.",
"title": ""
}
] |
[
{
"docid": "984b2f763a14331c5da36cd08f7482de",
"text": "This review of 68 studies compares the methodologies used for the identification and quantification of microplastics from the marine environment. Three main sampling strategies were identified: selective, volume-reduced, and bulk sampling. Most sediment samples came from sandy beaches at the high tide line, and most seawater samples were taken at the sea surface using neuston nets. Four steps were distinguished during sample processing: density separation, filtration, sieving, and visual sorting of microplastics. Visual sorting was one of the most commonly used methods for the identification of microplastics (using type, shape, degradation stage, and color as criteria). Chemical and physical characteristics (e.g., specific density) were also used. The most reliable method to identify the chemical composition of microplastics is by infrared spectroscopy. Most studies reported that plastic fragments were polyethylene and polypropylene polymers. Units commonly used for abundance estimates are \"items per m(2)\" for sediment and sea surface studies and \"items per m(3)\" for water column studies. Mesh size of sieves and filters used during sampling or sample processing influence abundance estimates. Most studies reported two main size ranges of microplastics: (i) 500 μm-5 mm, which are retained by a 500 μm sieve/net, and (ii) 1-500 μm, or fractions thereof that are retained on filters. We recommend that future programs of monitoring continue to distinguish these size fractions, but we suggest standardized sampling procedures which allow the spatiotemporal comparison of microplastic abundance across marine environments.",
"title": ""
},
{
"docid": "ffdeaeb1df2fbaaa203fc19b08c69cbe",
"text": "In the past decade, the population of disability grew rapidly and became one of main problems in our society. The significant amounts of impaired patients include not only lower limb but also upper limb impairment such as the motor function of arm and hand. However, physical therapy requires high personal expenses and takes long time to complete the rehabilitation. In order to solve the problem mentioned above, a wearable hand exoskeleton system was developed in this paper. The hand exoskeleton system typically designed to accomplish the requirements for rehabilitation. Figure 1 shows the prototype of hand exoskeleton system which can be easily worn on human hand. The developed exoskeleton finger can provide bi-directional movements in bending and extension motion for all joints of the finger through cable transmission. The kinematic relations between the fingertip and metacarpal was derived and verified. Moreover, the construction of control system is presented in this paper. The preliminary experiment results for finger's position control have demonstrated that the proposed device is capable of accommodating to the aforementioned variables.",
"title": ""
},
{
"docid": "714242b8967ef68c022e568ef2fe01dd",
"text": "Visual localization is a key step in many robotics pipelines, allowing the robot to (approximately) determine its position and orientation in the world. An efficient and scalable approach to visual localization is to use image retrieval techniques. These approaches identify the image most similar to a query photo in a database of geo-tagged images and approximate the query’s pose via the pose of the retrieved database image. However, image retrieval across drastically different illumination conditions, e.g. day and night, is still a problem with unsatisfactory results, even in this age of powerful neural models. This is due to a lack of a suitably diverse dataset with true correspondences to perform end-to-end learning. A recent class of neural models allows for realistic translation of images among visual domains with relatively little training data and, most importantly, without ground-truth pairings. In this paper, we explore the task of accurately localizing images captured from two traversals of the same area in both day and night. We propose ToDayGAN – a modified imagetranslation model to alter nighttime driving images to a more useful daytime representation. We then compare the daytime and translated night images to obtain a pose estimate for the night image using the known 6-DOF position of the closest day image. Our approach improves localization performance by over 250% compared the current state-of-the-art, in the context of standard metrics in multiple categories.",
"title": ""
},
{
"docid": "c194e9c91d4a921b42ddacfc1d5a214f",
"text": "Smartphone applications' energy efficiency is vital, but many Android applications suffer from serious energy inefficiency problems. Locating these problems is labor-intensive and automated diagnosis is highly desirable. However, a key challenge is the lack of a decidable criterion that facilitates automated judgment of such energy problems. Our work aims to address this challenge. We conducted an in-depth study of 173 open-source and 229 commercial Android applications, and observed two common causes of energy problems: missing deactivation of sensors or wake locks, and cost-ineffective use of sensory data. With these findings, wepropose an automated approach to diagnosing energy problems in Android applications. Our approach explores an application's state space by systematically executing the application using Java PathFinder (JPF). It monitors sensor and wake lock operations to detect missing deactivation of sensors and wake locks. It also tracks the transformation and usage of sensory data and judges whether they are effectively utilized by the application using our state-sensitive data utilization metric. In this way, our approach can generate detailed reports with actionable information to assist developers in validating detected energy problems. We built our approach as a tool, GreenDroid, on top of JPF. Technically, we addressed the challenges of generating user interaction events and scheduling event handlers in extending JPF for analyzing Android applications. We evaluated GreenDroid using 13 real-world popular Android applications. GreenDroid completed energy efficiency diagnosis for these applications in a few minutes. It successfully located real energy problems in these applications, and additionally found new unreported energy problems that were later confirmed by developers.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "83b50f380f500bf6e140b3178431f0c6",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "bd111864fb4081b79e17ccd517157413",
"text": "We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data. Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a “blind spot” in the receptive field, we address two of its shortcomings: inefficient training and somewhat disappointing final denoising performance. This is achieved through a novel blind-spot convolutional network architecture that allows efficient self-supervised training, as well as application of Bayesian distribution prediction on output colors. Together, they bring the selfsupervised model on par with fully supervised deep learning techniques in terms of both quality and training speed in the case of i.i.d. Gaussian noise.",
"title": ""
},
{
"docid": "1bf801e8e0348ccd1e981136f604dd18",
"text": "Sketch recognition is one of the integral components used by law enforcement agencies in solving crime. In recent past, software generated composite sketches are being preferred as they are more consistent and faster to construct than hand drawn sketches. Matching these composite sketches to face photographs is a complex task because the composite sketches are drawn based on the witness description and lack minute details which are present in photographs. This paper presents a novel algorithm for matching composite sketches with photographs using transfer learning with deep learning representation. In the proposed algorithm, first the deep learning architecture based facial representation is learned using large face database of photos and then the representation is updated using small problem-specific training database. Experiments are performed on the extended PRIP database and it is observed that the proposed algorithm outperforms recently proposed approach and a commercial face recognition system.",
"title": ""
},
{
"docid": "b31723195f18a128e2de04918808601d",
"text": "Realistic secure processors, including those built for academic and commercial purposes, commonly realize an “attested execution” abstraction. Despite being the de facto standard for modern secure processors, the “attested execution” abstraction has not received adequate formal treatment. We provide formal abstractions for “attested execution” secure processors and rigorously explore its expressive power. Our explorations show both the expected and the surprising. On one hand, we show that just like the common belief, attested execution is extremely powerful, and allows one to realize powerful cryptographic abstractions such as stateful obfuscation whose existence is otherwise impossible even when assuming virtual blackbox obfuscation and stateless hardware tokens. On the other hand, we show that surprisingly, realizing composable two-party computation with attested execution processors is not as straightforward as one might anticipate. Specifically, only when both parties are equipped with a secure processor can we realize composable two-party computation. If one of the parties does not have a secure processor, we show that composable two-party computation is impossible. In practice, however, it would be desirable to allow multiple legacy clients (without secure processors) to leverage a server’s secure processor to perform a multi-party computation task. We show how to introduce minimal additional setup assumptions to enable this. Finally, we show that fair multi-party computation for general functionalities is impossible if secure processors do not have trusted clocks. When secure processors have trusted clocks, we can realize fair two-party computation if both parties are equipped with a secure processor; but if only one party has a secure processor (with a trusted clock), then fairness is still impossible for general functionalities.",
"title": ""
},
{
"docid": "4f43a692ff8f6aed3a3fc4521c86d35e",
"text": "LEARNING OBJECTIVES\nAfter reading this article, the participant should be able to: 1. Understand the challenges in restoring volume and structural integrity in rhinoplasty. 2. Identify the appropriate uses of various autografts in aesthetic and reconstructive rhinoplasty (septal cartilage, auricular cartilage, costal cartilage, calvarial and nasal bone, and olecranon process of the ulna). 3. Identify the advantages and disadvantages of each of these autografts.\n\n\nSUMMARY\nThis review specifically addresses the use of autologous grafts in rhinoplasty. Autologous materials remain the preferred graft material for use in rhinoplasty because of their high biocompatibility and low risk of infection and extrusion. However, these advantages should be counterbalanced with the concerns of donor-site morbidity, graft availability, and graft resorption.",
"title": ""
},
{
"docid": "11aec9f84ce629f204b57bcf18c5cd38",
"text": "Our legal question answering system combines legal information retrieval and textual entailment, and we propose a legal question answering system that exploits a deep convolutional neural network. We have evaluated our system using the training data from the competition on legal information extraction/entailment (COLIEE). The competition focuses on the legal information processing related to answering yes/no questions from Japanese legal bar exams, and it consists of two phases: legal ad-hoc information retrieval, and textual entailment. Phase 1 requires the identification of Japan civil law articles relevant to a legal bar exam query. For that phase, we have implemented TF-IDF and Ranking SVM. Phase 2 is to answer “Yes” or “No” to previously unseen queries, by comparing the meanings of queries with relevant articles. Our choice of features used for Phase 2 focuses on word embeddings, syntactic similarities and identification of negation/antonym relations. Phase 2 is a textual entailment problem, and we use a convolutional neural network with dropout regularization and Rectified Linear Units. To our knowledge, our study is the first approach adopting deep learning in the textual entailment field. Experimental evaluation demonstrates the effectiveness of the convolutional neural network and dropout regularization. The results show that our deep learning-based method outperforms the SVM-based supervised model.",
"title": ""
},
{
"docid": "1f7f0b82bf5822ee51313edfd1cb1593",
"text": "With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using estimation of signal parameter via rotational invariance technique method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.",
"title": ""
},
{
"docid": "cc33bcc919e5878fa17fd17b63bb8a34",
"text": "This paper deals with mean-field Eshelby-based homogenization techniques for multi-phase composites and focuses on three subjects which in our opinion deserved more attention than they did in the existing literature. Firstly, for two-phase composites, that is when in a given representative volume element all the inclusions have the same material properties, aspect ratio and orientation, an interpolative double inclusion model gives perhaps the best predictions to date for a wide range of volume fractions and stiffness contrasts. Secondly, for multi-phase composites (including two-phase composites with non-aligned inclusions as a special case), direct homogenization schemes might lead to a non-symmetric overall stiffness tensor, while a two-step homogenization procedure gives physically acceptable results. Thirdly, a general procedure allows to formulate the thermo-elastic version of any homogenization model defined by its isothermal strain concentration tensors. For all three subjects, the theory is presented in detail and validated against experimental data or finite element results for numerous composite systems. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "19b041beb43aadfbde514dc5bb7f7da5",
"text": "The European Train Control System (ETCS) is the leading signaling system for train command and control. In the future, ETCS may be delivered over long-term evolution (LTE) networks. Thus, LTE performance offered to ETCS must be analyzed and confronted with the railway safety requirements. It is especially important to ensure the integrity of the ETCS data, i.e., to protect ETCS data against loss and corruption. In this article, various retransmission mechanisms are considered for providing end-to-end ETCS data integrity in LTE. These mechanisms are validated in simulations, which model worst-case conditions regarding train locations, traffic load, and base-station density. The simulation results show that ETCS data integrity requirements can be fulfilled even under these unfavorable conditions with the proper LTE mechanisms.",
"title": ""
},
{
"docid": "fac539d4214828534e04da744cb67db3",
"text": "This letter proposes a novel design approach of wideband 90 ° phase shifter, which comprises a stepped impedance open stub (SIOS) and a coupled-line with weak coupling. The result of analyses demonstrates that the bandwidths of return loss (RL) and phase deviation (PD) can be expanded by increasing the impedance ratio of the SIOS and the coupling strength of the coupled-line. For RL > 10 dB, insertion loss (IL) 1.1 dB, and PD of ±5°, the fabricated microstrip single-layer phase shifter exhibits bandwidth of 105% from 0.75 to 2.4 GHz.",
"title": ""
},
{
"docid": "5daeccb1a01df4f68f23c775828be41d",
"text": "This article surveys the research and development of Engineered Cementitious Composites (ECC) over the last decade since its invention in the early 1990’s. The importance of micromechanics in the materials design strategy is emphasized. Observations of unique characteristics of ECC based on a broad range of theoretical and experimental research are examined. The advantageous use of ECC in certain categories of structural, and repair and retrofit applications is reviewed. While reflecting on past advances, future challenges for continued development and deployment of ECC are noted. This article is based on a keynote address given at the International Workshop on Ductile Fiber Reinforced Cementitious Composites (DFRCC) – Applications and Evaluations, sponsored by the Japan Concrete Institute, and held in October 2002 at Takayama, Japan.",
"title": ""
},
{
"docid": "67f64865103301d730fdbcb6338d0db2",
"text": "Access to the internet and WWW is growing extensively, which results in heavy network traffic. To reduce the network traffic proxy server is used. Proxy server reduces the load of server. If the cache replacement algorithm of proxy server’s cache is efficient then proxy server will be helpful to reduce the network traffic in more efficient manner. In this paper we are considering proxy server cache to be Level 1 (L1) cache and storage cache of proxy server to be Level 2 (L2) cache. For collecting the real trace proxy server is used in an organization. Log of proxy server gives the information of various URLs accessed by various clients with time. For performing experiments various URLs were given a numeric identity. This paper analyzes the behavior of various replacement algorithms. The replacement algorithms taken into consideration are Least Recently Used (LRU), Least Frequently Used (LFU), First In First Out (FIFO). This paper proposes a new Replacement Algorithm named MFMR (Most Frequent with Maximum Reusability) for L1 which works about 16% better than existing algorithms considered in this paper. This paper also proposes a new replacement policy for L2 named AF_LRU (Average Frequency, Least Recently Used). Simulation results shows that pair of MFMR and AF-LRU is approximately 28% better than other existing pairs of replacement algorithms considered. Key WordLevel 1 Cache (L1); Level 2 Cache (L2); web access pattern; Replacement Algorithm; Proxy server; client.",
"title": ""
},
{
"docid": "3fae9d0778c9f9df1ae51ad3b5f62a05",
"text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.",
"title": ""
},
{
"docid": "a091e8885bd30e58f6de7d14e8170199",
"text": "This paper represents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor. The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.",
"title": ""
}
] |
scidocsrr
|
a7ecc679e00a090a141312f80c738635
|
PowerSpy: Location Tracking using Mobile Device Power Analysis
|
[
{
"docid": "5e286453dfe55de305b045eaebd5f8fd",
"text": "Target tracking is an important element of surveillance, guidance or obstacle avoidance, whose role is to determine the number, position and movement of targets. The fundamental building block of a tracking system is a filter for recursive state estimation. The Kalman filter has been flogged to death as the work-horse of tracking systems since its formulation in the 60's. In this talk we look beyond the Kalman filter at sequential Monte Carlo methods, collectively referred to as particle filters. Particle filters have become a popular method for stochastic dynamic estimation problems. This popularity can be explained by a wave of optimism among practitioners that traditionally difficult nonlinear/non-Gaussian dynamic estimation problems can now be solved accurately and reliably using this methodology. The computational cost of particle filters have often been considered their main disadvantage, but with ever faster computers and more efficient particle filter algorithms, this argument is becoming less relevant. The talk is organized in two parts. First we review the historical development and current status of particle filtering and its relevance to target tracking. We then consider in detail several tracking applications where conventional (Kalman based) methods appear inappropriate (unreliable or inaccurate) and where we instead need the potential benefits of particle filters. 1 The paper was written together with David Salmond, QinetiQ, UK.",
"title": ""
},
{
"docid": "74227709f4832c3978a21abb9449203b",
"text": "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor.",
"title": ""
}
] |
[
{
"docid": "64a730ce8aad5d4679409be43a291da7",
"text": "Background In the last years, it has been seen a shifting on society's consumption patterns, from mass consumption to second-hand culture. Moreover, consumer's perception towards second-hand stores, has been changing throughout the history of second-hand markets, according to the society's values prevailing in each time. Thus, the purchase intentions regarding second-hand clothes are influence by motivational and moderating factors according to the consumer's perception. Therefore, it was employed the theory of Guiot and Roux (2010) on motivational factors towards second-hand shopping and previous researches on moderating factors towards second-hand shopping. Purpose The purpose of this study is to explore consumer's perception and their purchase intentions towards second-hand clothing stores. Method For this, a qualitative and abductive approach was employed, combined with an exploratory design. Semi-structured face-to-face interviews were conducted utilizing a convenience sampling approach. Conclusion The findings show that consumers perception and their purchase intentions are influenced by their age and the environment where they live. However, the environment affect people in different ways. From this study, it could be found that elderly consumers are influenced by values and beliefs towards second-hand clothes. Young people are very influenced by the concept of fashion when it comes to second-hand clothes. For adults, it could be observed that price and the sense of uniqueness driver their decisions towards second-hand clothes consumption. The main motivational factor towards second-hand shopping was price. On the other hand, risk of contamination was pointed as the main moderating factor towards second-hand purchase. The study also revealed two new motivational factors towards second-hand clothing shopping, such charity and curiosity. Managers of second-hand clothing stores can make use of these findings to guide their decisions, especially related to improvements that could be done in order to make consumers overcoming the moderating factors towards second-hand shopping. The findings of this study are especially useful for second-hand clothing stores in Borås, since it was suggested couple of improvements for those stores based on the participant's opinions.",
"title": ""
},
{
"docid": "7ddc7a3fffc582f7eee1d0c29914ba1a",
"text": "Cyclic neutropenia is an uncommon hematologic disorder characterized by a marked decrease in the number of neutrophils in the peripheral blood occurring at regular intervals. The neutropenic phase is characteristically associated with clinical symptoms such as recurrent fever, malaise, headaches, anorexia, pharyngitis, ulcers of the oral mucous membrane, and gingival inflammation. This case report describes a Japanese girl who has this disease and suffers from periodontitis and oral ulceration. Her case has been followed up for the past 5 years from age 7 to 12. The importance of regular oral hygiene, careful removal of subgingival plaque and calculus, and periodic and thorough professional mechanical tooth cleaning was emphasized to arrest the progress of periodontal breakdown. Local antibiotic application with minocycline ointment in periodontal pockets was beneficial as an ancillary treatment, especially during neutropenic periods.",
"title": ""
},
{
"docid": "75060c7027db4e75bc42f3f3c84cad9b",
"text": "In this paper, we investigate whether superior performance on corporate social responsibility (CSR) strategies leads to better access to finance. We hypothesize that better access to finance can be attributed to a) reduced agency costs due to enhanced stakeholder engagement and b) reduced informational asymmetry due to increased transparency. Using a large cross-section of firms, we find that firms with better CSR performance face significantly lower capital constraints. Moreover, we provide evidence that both of the hypothesized mechanisms, better stakeholder engagement and transparency around CSR performance, are important in reducing capital constraints. The results are further confirmed using several alternative measures of capital constraints, a paired analysis based on a ratings shock to CSR performance, an instrumental variables and also a simultaneous equations approach. Finally, we show that the relation is driven by both the social and the environmental dimension of CSR.",
"title": ""
},
{
"docid": "66382b88e0faa573251d5039ccd65d6c",
"text": "In this communication, we present a new circularly-polarized array antenna using 2×2 linearly-polarized sub grid arrays in a low temperature co-fired ceramic technology for highly-integrated 60-GHz radio. The sub grid arrays are sequentially rotated and excited with a 90°-phase increment to radiate circularly-polarized waves. The feeding network of the array antenna is based on stripline quarter-wave matched T-junctions. The array antenna has a size of 15×15×0.9 mm3. Simulated and measured results confirm wide impedance, axial ratio, pattern, and gain bandwidths.",
"title": ""
},
{
"docid": "6766977de80074325165a82eeb08d671",
"text": "We synthesized the literature on gamification of education by conducting a review of the literature on gamification in the educational and learning context. Based on our review, we identified several game design elements that are used in education. These game design elements include points, levels/stages, badges, leaderboards, prizes, progress bars, storyline, and feedback. We provided examples from the literature to illustrate the application of gamification in the educational context.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "b8def7be21f014693589ae99385412dd",
"text": "Automatic image captioning has received increasing attention in recent years. Although there are many English datasets developed for this problem, there is only one Turkish dataset and it is very small compared to its English counterparts. Creating a new dataset for image captioning is a very costly and time consuming task. This work is a first step towards transferring the available, large English datasets into Turkish. We translated English captioning datasets into Turkish by using an automated translation tool and we trained an image captioning model on the automatically obtained Turkish captions. Our experiments show that this model yields the best performance so far on Turkish captioning.",
"title": ""
},
{
"docid": "8dfdd829881074dc002247c9cd38eba8",
"text": "The limited battery lifetime of modern embedded systems and mobile devices necessitates frequent battery recharging or replacement. Solar energy and small-size photovoltaic (PV) systems are attractive solutions to increase the autonomy of embedded and personal devices attempting to achieve perpetual operation. We present a battery less solar-harvesting circuit that is tailored to the needs of low-power applications. The harvester performs maximum-power-point tracking of solar energy collection under nonstationary light conditions, with high efficiency and low energy cost exploiting miniaturized PV modules. We characterize the performance of the circuit by means of simulation and extensive testing under various charging and discharging conditions. Much attention has been given to identify the power losses of the different circuit components. Results show that our system can achieve low power consumption with increased efficiency and cheap implementation. We discuss how the scavenger improves upon state-of-the-art technology with a measured power consumption of less than 1 mW. We obtain increments of global efficiency up to 80%, diverging from ideality by less than 10%. Moreover, we analyze the behavior of super capacitors. We find that the voltage across the supercapacitor may be an unreliable indicator for the stored energy under some circumstances, and this should be taken into account when energy management policies are used.",
"title": ""
},
{
"docid": "249a09e24ce502efb4669603b54b433d",
"text": "Deep Neural Networks (DNNs) are universal function approximators providing state-ofthe-art solutions on wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology, (2) quantification of DNNs stability regarding adversarial examples (i.e. modified inputs fooling DNN predictions whilst undetectable to humans), (3) absence of generalization guarantees and controllable behaviors for ambiguous patterns, (4) leverage unlabeled data to apply DNNs to domains where expert labeling is scarce as in the medical field. Answering those points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in tremendous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability. 1 ar X iv :1 71 0. 09 30 2v 3 [ st at .M L ] 6 N ov 2 01 7",
"title": ""
},
{
"docid": "b8cf5e3802308fe941848fea51afddab",
"text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.",
"title": ""
},
{
"docid": "43e5146e4a7723cf391b013979a1da32",
"text": "The notions of disintegration and Bayesian inversion are fundamental in conditional probability theory. They produce channels, as conditional probabilities, from a joint state, or from an already given channel (in opposite direction). These notions exist in the literature, in concrete situations, but are presented here in abstract graphical formulations. The resulting abstract descriptions are used for proving basic results in conditional probability theory. The existence of disintegration and Bayesian inversion is discussed for discrete probability, and also for measure-theoretic probability — via standard Borel spaces and via likelihoods. Finally, the usefulness of disintegration and Bayesian inversion is illustrated in several examples.",
"title": ""
},
{
"docid": "0321ef8aeb0458770cd2efc35615e11c",
"text": "Entity-relationship-structured data is becoming more important on the Web. For example, large knowledge bases have been automatically constructed by information extraction from Wikipedia and other Web sources. Entities and relationships can be represented by subject-property-object triples in the RDF model, and can then be precisely searched by structured query languages like SPARQL. Because of their Boolean-match semantics, such queries often return too few or even no results. To improve recall, it is thus desirable to support users by automatically relaxing or reformulating queries in such a way that the intention of the original user query is preserved while returning a sufficient number of ranked results. In this paper we describe comprehensive methods to relax SPARQL-like triplepattern queries in a fully automated manner. Our framework produces a set of relaxations by means of statistical language models for structured RDF data and queries. The query processing algorithms merge the results of different relaxations into a unified result list, with ranking based on any ranking function for structured queries over RDF-data. Our experimental evaluation, with two different datasets about movies and books, shows the effectiveness of the automatically generated relaxations and the improved quality of query results based on assessments collected on the Amazon Mechanical Turk platform.",
"title": ""
},
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "4c16117954f9782b3a22aff5eb50537a",
"text": "Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels. However, most successful applications to date require the two domains to be closely related (e.g., image-to-image, video-video), utilizing similar or shared networks to transform domain-specific properties like texture, coloring, and line shapes. Here, we demonstrate that it is possible to transfer across modalities (e.g., image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces. We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (e.g., variational autoencoder and a generative adversarial network). We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space. The proposed variation autoencoder enables preserving both locality and semantic alignment through the transfer process, as shown in the qualitative and quantitative evaluations. Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions.",
"title": ""
},
{
"docid": "3b7cfe02a34014c84847eea4790037e2",
"text": "Non-technical losses (NTL) such as electricity theft cause significant harm to our economies, as in some countries they may range up to 40% of the total electricity distributed. Detecting NTLs requires costly on-site inspections. Accurate prediction of NTLs for customers using machine learning is therefore crucial. To date, related research largely ignore that the two classes of regular and non-regular customers are highly imbalanced, that NTL proportions may change and mostly consider small data sets, often not allowing to deploy the results in production. In this paper, we present a comprehensive approach to assess three NTL detection models for different NTL proportions in large real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and Support Vector Machine. This work has resulted in appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart meter research in order to report their effectiveness on imbalanced and large real world data sets.",
"title": ""
},
{
"docid": "aea4b65d1c30e80e7f60a52dbecc78f3",
"text": "The aim of this paper is to automate the car and the car parking as well. It discusses a project which presents a miniature model of an automated car parking system that can regulate and manage the number of cars that can be parked in a given space at any given time based on the availability of parking spot. Automated parking is a method of parking and exiting cars using sensing devices. The entering to or leaving from the parking lot is commanded by an Android based application. We have studied some of the existing systems and it shows that most of the existing systems aren't completely automated and require a certain level of human interference or interaction in or with the system. The difference between our system and the other existing systems is that we aim to make our system as less human dependent as possible by automating the cars as well as the entire parking lot, on the other hand most existing systems require human personnel (or the car owner) to park the car themselves. To prove the effectiveness of the system proposed by us we have developed and presented a mathematical model which will be discussed in brief further in the paper.",
"title": ""
},
{
"docid": "bb94ef2ab26fddd794a5b469f3b51728",
"text": "This study examines the treatment outcome of a ten weeks dance movement therapy intervention on quality of life (QOL). The multicentred study used a subject-design with pre-test, post-test, and six months follow-up test. 162 participants who suffered from stress were randomly assigned to the dance movement therapy treatment group (TG) (n = 97) and the wait-listed control group (WG) (65). The World Health Organization Quality of Life Questionnaire 100 (WHOQOL-100) and Munich Life Dimension List were used in both groups at all three measurement points. Repeated measures ANOVA revealed that dance movement therapy participants in all QOL dimensions always more than the WG. In the short term, they significantly improved in the Psychological domain (p > .001, WHOQOL; p > .01, Munich Life Dimension List), Social relations/life (p > .10, WHOQOL; p > .10, Munich Life Dimension List), Global value (p > .05, WHOQOL), Physical health (p > .05, Munich Life Dimension List), and General life (p > .10, Munich Life Dimension List). In the long term, dance movement therapy significantly enhanced the psychological domain (p > .05, WHOQOL; p > .05, Munich Life Dimension List), Spirituality (p > .10, WHOQOL), and General life (p > .05, Munich Life Dimension List). Dance movement therapy is effective in the shortand long-term to improve QOL. © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "1b5bc53b1039f3e7aecbc8dcb2f3b9a8",
"text": "Agricultural lands occupy 37% of the earth's land surface. Agriculture accounts for 52 and 84% of global anthropogenic methane and nitrous oxide emissions. Agricultural soils may also act as a sink or source for CO2, but the net flux is small. Many agricultural practices can potentially mitigate greenhouse gas (GHG) emissions, the most prominent of which are improved cropland and grazing land management and restoration of degraded lands and cultivated organic soils. Lower, but still significant mitigation potential is provided by water and rice management, set-aside, land use change and agroforestry, livestock management and manure management. The global technical mitigation potential from agriculture (excluding fossil fuel offsets from biomass) by 2030, considering all gases, is estimated to be approximately 5500-6000Mt CO2-eq.yr-1, with economic potentials of approximately 1500-1600, 2500-2700 and 4000-4300Mt CO2-eq.yr-1 at carbon prices of up to 20, up to 50 and up to 100 US$ t CO2-eq.-1, respectively. In addition, GHG emissions could be reduced by substitution of fossil fuels for energy production by agricultural feedstocks (e.g. crop residues, dung and dedicated energy crops). The economic mitigation potential of biomass energy from agriculture is estimated to be 640, 2240 and 16 000Mt CO2-eq.yr-1 at 0-20, 0-50 and 0-100 US$ t CO2-eq.-1, respectively.",
"title": ""
},
{
"docid": "d9214591462b0780ede6d58dab42f48c",
"text": "Software testing in general and graphical user interface (GUI) testing in particular is one of the major challenges in the lifecycle of any software system. GUI testing is inherently more difficult than the traditional and command-line interface testing. Some of the factors that make GUI testing different from the traditional software testing and significantly more difficult are: a large number of objects, different look and feel of objects, many parameters associated with each object, progressive disclosure, complex inputs from multiple sources, and graphical outputs. The existing testing techniques for the creation and management of test suites need to be adapted/enhanced for GUIs, and new testing techniques are desired to make the creation and management of test suites more efficient and effective. In this article, a methodology is proposed to create test suites for a GUI. The proposed methodology organizes the testing activity into various levels. The tests created at a particular level can be reused at higher levels. This methodology extends the notion of modularity and reusability to the testing phase. The organization and management of the created test suites resembles closely to the structure of the GUI under test.",
"title": ""
},
{
"docid": "514d9326cb54cec16f4dfb05deca3895",
"text": "Photo publishing in Social Networks and other Web2.0 applications has become very popular due to the pervasive availability of cheap digital cameras, powerful batch upload tools and a huge amount of storage space. A portion of uploaded images are of a highly sensitive nature, disclosing many details of the users' private life. We have developed a web service which can detect private images within a user's photo stream and provide support in making privacy decisions in the sharing context. In addition, we present a privacy-oriented image search application which automatically identifies potentially sensitive images in the result set and separates them from the remaining pictures.",
"title": ""
}
] |
scidocsrr
|
ef65b501da87906c14eb3ea4fbc25418
|
Can Cybersecurity Be Proactive? A Big Data Approach and Challenges
|
[
{
"docid": "26b0038c375eaa619ff584360f401674",
"text": "We examine the code base of the OpenBSD operating system to determine whether its security is increasing over time. We measure the rate at which new code has been introduced and the rate at which vulnerabilities have been reported over the last 7.5 years and fifteen versions. We learn that 61% of the lines of code in today’s OpenBSD are foundational: they were introduced prior to the release of the initial version we studied and have not been altered since. We also learn that 62% of reported vulnerabilities were present when the study began and can also be considered to be foundational. We find strong statistical evidence of a decrease in the rate at which foundational vulnerabilities are being reported. However, this decrease is anything but brisk: foundational vulnerabilities have a median lifetime of at least 2.6 years. Finally, we examined the density of vulnerabilities in the code that was altered/introduced in each version. The densities ranged from 0 to 0.033 vulnerabilities reported per thousand lines of code. These densities will increase as more vulnerabilities are reported. ∗This work is sponsored by the I3P under Air Force Contract FA8721-05-0002. Opinions, interpretations, conclusions and recommendations are those of the author(s) and are not necessarily endorsed by the United States Government. †This work was produced under the auspices of the Institute for Information Infrastructure Protection (I3P) research program. The I3P is managed by Dartmouth College, and supported under Award number 2003-TK-TX-0003 from the U.S. Department of Homeland Security, Science and Technology Directorate. Points of view in this document are those of the authors and do not necessarily represent the official position of the U.S. Department of Homeland Security, the Science and Technology Directorate, the I3P, or Dartmouth College. ‡Currently at the University of Cambridge",
"title": ""
}
] |
[
{
"docid": "a17bf7467da65eede493d543a335c9ae",
"text": "Recently interest has grown in applying activity theory, the leading theoretical approach in Russian psychology, to issues of human-computer interaction. This chapter analyzes why experts in the field are looking for an alternative to the currently dominant cognitive approach. The basic principles of activity theory are presented and their implications for human-computer interaction are discussed. The chapter concludes with an outline of the potential impact of activity theory on studies and design of computer use in real-life settings.",
"title": ""
},
{
"docid": "416a3d01c713a6e751cb7893c16baf21",
"text": "BACKGROUND\nAnaemia is associated with poor cancer control, particularly in patients undergoing radiotherapy. We investigated whether anaemia correction with epoetin beta could improve outcome of curative radiotherapy among patients with head and neck cancer.\n\n\nMETHODS\nWe did a multicentre, double-blind, randomised, placebo-controlled trial in 351 patients (haemoglobin <120 g/L in women or <130 g/L in men) with carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx. Patients received curative radiotherapy at 60 Gy for completely (R0) and histologically incomplete (R1) resected disease, or 70 Gy for macroscopically incompletely resected (R2) advanced disease (T3, T4, or nodal involvement) or for primary definitive treatment. All patients were assigned to subcutaneous placebo (n=171) or epoetin beta 300 IU/kg (n=180) three times weekly, from 10-14 days before and continuing throughout radiotherapy. The primary endpoint was locoregional progression-free survival. We assessed also time to locoregional progression and survival. Analysis was by intention to treat.\n\n\nFINDINGS\n148 (82%) patients given epoetin beta achieved haemoglobin concentrations higher than 140 g/L (women) or 150 g/L (men) compared with 26 (15%) given placebo. However, locoregional progression-free survival was poorer with epoetin beta than with placebo (adjusted relative risk 1.62 [95% CI 1.22-2.14]; p=0.0008). For locoregional progression the relative risk was 1.69 (1.16-2.47, p=0.007) and for survival was 1.39 (1.05-1.84, p=0.02).\n\n\nINTERPRETATION\nEpoetin beta corrects anaemia but does not improve cancer control or survival. Disease control might even be impaired. Patients receiving curative cancer treatment and given erythropoietin should be studied in carefully controlled trials.",
"title": ""
},
{
"docid": "b9404d66fa6cc759382c73d6ae16fc0c",
"text": "Aspect extraction is an important and challenging task in aspect-based sentiment analysis. Existing works tend to apply variants of topic models on this task. While fairly successful, these methods usually do not produce highly coherent aspects. In this paper, we present a novel neural approach with the aim of discovering coherent aspects. The model improves coherence by exploiting the distribution of word co-occurrences through the use of neural word embeddings. Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space. In addition, we use an attention mechanism to de-emphasize irrelevant words during training, further improving the coherence of aspects. Experimental results on real-life datasets demonstrate that our approach discovers more meaningful and coherent aspects, and substantially outperforms baseline methods on several evaluation tasks.",
"title": ""
},
{
"docid": "dca9a39a9fdf69825ab37196a8b8acea",
"text": "We contrast two seemingly distinct approaches to the task of question answering (QA) using Freebase: one based on information extraction techniques, the other on semantic parsing. Results over the same test-set were collected from two state-ofthe-art, open-source systems, then analyzed in consultation with those systems’ creators. We conclude that the differences between these technologies, both in task performance, and in how they get there, is not significant. This suggests that the semantic parsing community should target answering more compositional open-domain questions that are beyond the reach of more direct information extraction methods.",
"title": ""
},
{
"docid": "f87459e12d6dba8f3a04424c4db709f6",
"text": "The study of empathy, a translation of the term 'Einfühlung', originated in 19th century Germany in the sphere of aesthetics, and was followed by studies in psychology and then neuroscience. During the past decade the links between empathy and art have started to be investigated, but now from the neuroscientific perspective, and two different approaches have emerged. Recently, the primacy of the mirror neuron system and its association with automaticity and imitative, simulated movement has been envisaged. But earlier, a number of eminent art historians had pointed to the importance of cognitive responses to art; these responses might plausibly be subserved by alternative neural networks. Focusing here mainly on pictures depicting pain and evoking empathy, both approaches are considered by summarizing the evidence that either supports the involvement of the mirror neuron system, or alternatively suggests other neural networks are likely to be implicated. The use of such pictures in experimental studies exploring the underlying neural processes, however, raises a number of concerns, and suggests caution is exercised in drawing conclusions concerning the networks that might be engaged. These various networks are discussed next, taking into account the affective and sensory components of the pain experience, before concluding that both mirror neuron and alternative neural networks are likelyto be enlisted in the empathetic response to images of pain. A somewhat similar duality of spontaneous and cognitive processes may perhaps also be paralleled in the creation of such images. While noting that some have repudiated the neuroscientific approach to the subject, pictures are nevertheless shown here to represent an unusual but invaluable tool in the study of pain and empathy.",
"title": ""
},
{
"docid": "146d5e7a8079a0b5171d9bc2813f3052",
"text": "The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a stateof-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object’s parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest, or its background. The significance of parts in computer vision has been recognized since the earliest days of the field (e.g. [3, 4, 5]), and there exists a rich history of work on probabilistic models for parts-based segmentation e.g. [6, 7]. Many such models only consider local neighborhood statistics, however several models have recently been proposed that aim to increase the accuracy of segmentations by also incorporating prior knowledge about the foreground object’s shape [8, 9, 10, 11]. In such cases, probabilistic techniques often mainly differ in how accurately they represent and learn about the variability exhibited by the shapes of the object’s parts. Accurate models of the shapes and appearances of parts can be necessary to perform inference in datasets that exhibit large amounts of variability. In general, the stronger the models of these two components, the more performance is improved. A generative model has the added benefit of being able to generate samples, which allows us to visually inspect the quality of its understanding of the data and the problem. Recently, a generative probabilistic model known as the Shape Boltzmann Machine (SBM) has been used to model binary object shapes [1]. The SBM has been shown to constitute the state-of-the-art and it possesses several highly desirable characteristics: samples from the model look realistic, and it generalizes to generate samples that differ from the limited number of examples it is trained on. The main contributions of this paper are as follows: 1) In order to account for object parts we extend the SBM to use multinomial visible units instead of binary ones, resulting in the Multinomial Shape Boltzmann Machine (MSBM), and we demonstrate that the MSBM constitutes a strong model of parts-based object shape. 2) We combine the MSBM with an appearance model to form a fully generative model of images of objects (see Fig. 1). We show how parts-based object segmentations can be obtained simply by performing probabilistic inference in the model. We apply our model to two challenging datasets and find that in addition to being principled and fully generative, the model’s performance is comparable to the state-of-the-art.",
"title": ""
},
{
"docid": "1675d99203da64eab8f9722b77edaab5",
"text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.",
"title": ""
},
{
"docid": "fe31348bce3e6e698e26aceb8e99b2d8",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
},
{
"docid": "b89c1b7f1cb697b720b4e15c176b6c28",
"text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless design of experiments using the taguchi approach 16 steps to product and process improvement sources. Yeah, sources about the books from countries in the world are provided.",
"title": ""
},
{
"docid": "b3fb6a69b1afbf42f7c6cdea5852b736",
"text": "This paper reviews the history of automotive technology development and human factors research, largely by decade, since the inception of the automobile. The human factors aspects were classified into primary driving task aspects (controls, displays, and visibility), driver workspace (seating and packaging, vibration, comfort, and climate), driver’s condition (fatigue and impairment), crash injury, advanced driver-assistance systems, external communication access, and driving behavior. For each era, the paper describes the SAE and ISO standards developed, the major organizations and conferences established, the major news stories affecting vehicle safety, and the general social context. The paper ends with a discussion of what can be learned from this historical review and the major issues to be addressed. A major contribution of this paper is more than 180 references that represent the foundation of automotive human factors, which should be considered core knowledge and should be familiar to those in the profession.",
"title": ""
},
{
"docid": "8d5d2f266181d456d4f71df26075a650",
"text": "Integrated architectures in the automotive and avionic domain promise improved resource utilization and enable a better tactic coordination of application subsystems compared to federated systems. In order to support safety-critical application subsystems, an integrated architecture needs to support fault-tolerant strategies that enable the continued operation of the system in the presence of failures. The basis for the implementation and validation of fault-tolerant strategies is a fault hypothesis that identifies the fault containment regions, specifies the failure modes and provides realistic failure rate assumptions. This paper describes a fault hypothesis for integrated architectures, which takes into account the collocation of multiple software components on shared node computers. We argue in favor of a differentiation of fault containment regions for hardware and software faults. In addition, the fault hypothesis describes the assumptions concerning the respective frequencies of transient and permanent failures in consideration of recent semiconductor trends",
"title": ""
},
{
"docid": "9ad7bc553928a3eca2756accfa5c7695",
"text": "Network slices combine resource virtualization with the isolation level required by future 5G applications. In addition, the use of monitoring and data analytics help to maintain the required network performance, while reducing total cost of ownership. In this paper, an architecture to enable autonomic slice networking is presented. Extended nodes make local decisions close to network devices, whereas centralized domain systems collate and export metered data transparently to customer controllers, all of them leveraging customizable and isolated data analytics processes. Discovered knowledge can be applied for both proactive and reactive network slice reconfiguration, triggered either by service providers or customers, thanks to the interaction with state-of-the-art software-defined networking controllers and planning tools. The architecture is experimentally demonstrated by means of a complex use case for a multidomain multilayer multiprotocol label switching (MPLS)-over-optical network. In particular, the use case consists of the following observe–analyze–act loops: 1) proactive network slice rerouting after bit error rate (BER) degradation detection in a lightpath supporting a virtual link (vlink); 2) reactive core network restoration after optical link failure; and 3) reactive network slice rerouting after the degraded lightpath is restored. The proposed architecture is experimentally validated on a distributed testbed connecting premises in UPC (Spain) and CNIT (Italy).",
"title": ""
},
{
"docid": "af3c883f0538398dbd7a12495aa39bca",
"text": "BACKGROUND\nA technique of unilateral cleft lip repair is described. The repair draws from a variety of previously described repairs and adheres to a concept of anatomical subunits of the lip. Cases from within the spectrum of the deformity have been chosen from a series of 144 consecutive cases to demonstrate the applicability of the technique in all forms of unilateral cleft lip.\n\n\nMETHODS\nIncisions cross the lip perpendicular to the cutaneous roll at the cleft side peak of Cupid's bow of the medial lip and at the base of the philtral column of the lateral lip. Above this level, incisions ascend the lip to allow for approximation along a line symmetrical with the non-cleft-side philtral column. Incisions then ascend superolaterally bordering the lip columellar crease to the point of closure in the nostril sill. A Rose-Thompson lengthening effect occurs just above the level of the cutaneous roll. If necessary, a small triangle positioned just above the cutaneous roll is often used. Any central vermilion deficiency is augmented by a laterally based triangular vermilion flap from the lateral lip element.\n\n\nRESULTS\nSince January of 2000, this technique has been used in 144 consecutive unilateral cleft lip repairs. The inferior triangle is small (average, 1.24 mm; range, 0 to 2 mm). The technique can be applied to all degrees of unilateral cleft lip.\n\n\nCONCLUSIONS\nA technique of unilateral cleft lip repair is described. The repair allows for a repair line that ascends the lip at the seams of anatomical subunits.",
"title": ""
},
{
"docid": "b8f81b8274dc466114d945bb3a597fea",
"text": "SIGNIFICANCE\nNonalcoholic fatty liver disease (NAFLD), characterized by liver triacylglycerol build-up, has been growing in the global world in concert with the raised prevalence of cardiometabolic disorders, including obesity, diabetes, and hyperlipemia. Redox imbalance has been suggested to be highly relevant to NAFLD pathogenesis. Recent Advances: As a major health problem, NAFLD progresses to the more severe nonalcoholic steatohepatitis (NASH) condition and predisposes susceptible individuals to liver and cardiovascular disease. Although NAFLD represents the predominant cause of chronic liver disorders, the mechanisms of its development and progression remain incompletely understood, even if various scientific groups ascribed them to the occurrence of insulin resistance, dyslipidemia, inflammation, and apoptosis. Nevertheless, oxidative stress (OxS) more and more appears as the most important pathological event during NAFLD development and the hallmark between simple steatosis and NASH manifestation.\n\n\nCRITICAL ISSUES\nThe purpose of this article is to summarize recent developments in the understanding of NAFLD, essentially focusing on OxS as a major pathogenetic mechanism. Various attempts to translate reactive oxygen species (ROS) scavenging by antioxidants into experimental and clinical studies have yielded mostly encouraging results.\n\n\nFUTURE DIRECTIONS\nAlthough augmented concentrations of ROS and faulty antioxidant defense have been associated to NAFLD and related complications, mechanisms of action and proofs of principle should be highlighted to support the causative role of OxS and to translate its concept into the clinic. Antioxid. Redox Signal. 26, 519-541.",
"title": ""
},
{
"docid": "06044ef2950f169eba39687cd3e723c1",
"text": "Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is neovascularisation, the growth of abnormal new vessels. This paper describes an automated method for the detection of new vessels in retinal images. Two vessel segmentation approaches are applied, using the standard line operator and a novel modified line operator. The latter is designed to reduce false responses to non-vessel edges. Both generated binary vessel maps hold vital information which must be processed separately. This is achieved with a dual classification system. Local morphology features are measured from each binary vessel map to produce two separate feature sets. Independent classification is performed for each feature set using a support vector machine (SVM) classifier. The system then combines these individual classification outcomes to produce a final decision. Sensitivity and specificity results using a dataset of 60 images are 0.862 and 0.944 respectively on a per patch basis and 1.00 and 0.90 respectively on a per image basis.",
"title": ""
},
{
"docid": "009c628d26b06d7a1daf3eac104b2fe4",
"text": "A neutral templating route for preparing mesoporous molecular sieves is demonstrated based on hydrogen-bonding interactions and self-assembly between neutral primary amine micelles (S degrees ) and neutral inorganic precursors (l degrees ). The S degrees l degrees templating pathway produces ordered mesoporous materials with thicker framework walls, smaller x-ray scattering domain sizes, and substantially improved textural mesoporosities in comparison with M41S materials templated by quaternary ammonium cations of equivalent chain length. This synthetic strategy also allows for the facile, environmentally benign recovery of the cost-intensive template by simple solvent extraction methods. The S degrees 1 degrees templating route provides for the synthesis of other oxide mesostructures (such as aluminas) that may be less readily accessible by electrostatic templating pathways.",
"title": ""
},
{
"docid": "b7bf40c61ff4c73a8bbd5096902ae534",
"text": "—In therapeutic and functional applications transcutaneous electrical stimulation (TES) is still the most frequently applied technique for muscle and nerve activation despite the huge efforts made to improve implantable technologies. Stimulation electrodes play the important role in interfacing the tissue with the stimulation unit. Between the electrode and the excitable tissue there are a number of obstacles in form of tissue resistivities and permittivities that can only be circumvented by magnetic fields but not by electric fields and currents. However, the generation of magnetic fields needed for the activation of excitable tissues in the human body requires large and bulky equipment. TES devices on the other hand can be built cheap, small and light weight. The weak part in TES is the electrode that cannot be brought close enough to the excitable tissue and has to fulfill a number of requirements to be able to act as efficient as possible. The present review article summarizes the most important factors that influence efficient TES, presents and discusses currently used electrode materials, designs and configurations, and points out findings that have been obtained through modeling, simulation and testing.",
"title": ""
},
{
"docid": "db3b14f6298771b44506a17da57c21ae",
"text": "Virtuosos are human beings who exhibit exceptional performance in their field of activity. In particular, virtuosos are interesting for creativity studies because they are exceptional problem solvers. However, virtuosity is an under-studied field of human behaviour. Little is known about the processes involved to become a virtuoso, and in how they distinguish themselves from normal performers. Virtuosos exist in virtually all domains of human activities, and we focus in this chapter on the specific case of virtuosity in jazz improvisation. We first introduce some facts about virtuosos coming from physiology, and then focus on the case of jazz. Automatic generation of improvisation has long been a subject of study for computer science, and many techniques have been proposed to generate music improvisation in various genres. The jazz style in particular abounds with programs that create improvisations of a reasonable level. However, no approach so far exhibits virtuosolevel performance. We describe an architecture for the generation of virtuoso bebop phrases which integrates novel music generation mechanisms in a principled way. We argue that modelling such outstanding phenomena can contribute substantially to the understanding of creativity in humans and machines. 5.1 Virtuosos as Exceptional Humans 5.1.1 Virtuosity in Art There is no precise definition of virtuosity, but only a commonly accepted view that virtuosos are human beings that excel in their practice to the point of exhibiting exceptional performance. Virtuosity exists in virtually all forms of human activity. In painting, several artists use virtuosity as a means to attract the attention of their audience. Felice Varini paints on urban spaces in such a way that there is a unique viewpoint from which a spectator sees the painting as a perfect geometrical figure. The F. Pachet ( ) Sony CSL-Paris, 6, rue Amyot, 75005 Paris, France e-mail: [email protected] J. McCormack, M. d’Inverno (eds.), Computers and Creativity, DOI 10.1007/978-3-642-31727-9_5, © Springer-Verlag Berlin Heidelberg 2012 115",
"title": ""
},
{
"docid": "1bf2f9e48a67842412a3b32bb2dd3434",
"text": "Since Paul Broca, the relationship between mind and brain has been the central preoccupation of cognitive neuroscience. In the 19th century, recognition that mental faculties might be understood by observations of individuals with brain damage led to vigorous debates about the properties of mind. By the end of the First World War, neurologists had outlined basic frameworks for the neural organization of language, perception, and motor cognition. Geschwind revived these frameworks in the 1960s and by the 1980s, lesion studies had incorporated methods from experimental psychology, models from cognitive science, formalities from computational approaches, and early developments in structural brain imaging. Around the same time, functional neuroimaging entered the scene. Early xenon probes evolved to the present-day wonders of BOLD and perfusion imaging. In a quick two decades, driven by these technical advances, centers for cognitive neuroscience now dot the landscape, journals such as this one are thriving, and the annual meeting of the Society for Cognitive Neuroscience is overflowing. In these heady times, a group of young cognitive neuroscientists training at a center in which human lesion studies and functional neuroimaging are pursued with similar vigor inquire about the relative impact of these two methods on the field. Fellows and colleagues, in their article titled ‘‘Method matters: An empirical study of impact on cognitive neuroscience,’’ point out that the nature of the evidence derived from the two methods are different. Importantly, they have complementary strengths and weaknesses. A critical difference highlighted in their article is that functional imaging by necessity provides correlational data, whereas lesion studies can support necessity claims for a specific brain region in a particular function. The authors hypothesize that despite the obvious growth of functional imaging in the last decade or so, lesion studies would have a disproportionate impact on cognitive neuroscience because they offer the possibility of establishing a causal role for structure in behavior in a way that is difficult to establish using functional imaging. The authors did not confirm this hypothesis. Using bibliometric methods, they found that functional imaging studies were cited three times as often as lesion studies, in large part because imaging studies were more likely to be published in high-impact journals. Given the complementary nature of the evidence from both methods, they anticipated extensive cross-method references. However, they found a within-method bias to citations generally, and, furthermore, functional imaging articles cited lesion studies considerably less often than the converse. To confirm the trends indicated by Fellows and colleagues, I looked at the distribution of cognitive neuroscience methods in the abstracts accepted for the 2005 Annual Meeting of the Cognitive Neuroscience Society (see Figure 1). Imaging studies composed over a third of all abstracts, followed by electrophysiological studies, the bulk of which were event-related potential (ERP) and magnetoencephalogram (MEG) studies. Studies that used patient populations composed 16% of the abstracts. The patient studies were almost evenly split between those focused on understanding a disease (47%), such as autism or schizophrenia, and those in which structure–function relationships were a consideration (53%). 
These observations do not speak of the final impact of these studies, but they do point out the relative lack of patient-based studies, particularly those addressing basic cognitive neuroscience questions. Fellows and colleagues pose the following question: Despite the greater ‘‘in-principle’’ inferential strength of lesion than functional imaging studies, why in practice do they have less impact on the field? They suggest that sociologic and practical considerations, rather than scientific merit, might be at play. Here, I offer my speculations on the factors that contribute to the relative impact of these methods. These speculations are not intended to be comprehensive. Rather they are intended to begin conversations in response to the question posed by Fellows and colleagues. In my view, the disproportionate impact of functional imaging compared to lesion studies is driven by three factors: the appeal of novelty and technology, by ease of access to neural data, and, in a subtle way, to the pragmatics of hypothesis testing. First, novelty is intrinsically appealing. As a clinician, I often encounter patients requesting the latest medications, even when they are more expensive and not demonstrably better than older ones. As scions of the enlightenment, many of us believe in progress, and that things newer are generally things better. Lesion studies have been around for a century and a half. Any advances made now are likely to be incremental. By contrast, functional imaging is truly a new way to examine the University of Pennsylvania",
"title": ""
}
] |
scidocsrr
|
abe66f029600b23d6f9401a51417505d
|
The Feature Selection and Intrusion Detection Problems
|
[
{
"docid": "2568f7528049b4ffc3d9a8b4f340262b",
"text": "We introduce a new form of linear genetic programming (GP). Two methods of acceleration of our GP approach are discussed: 1) an efficient algorithm that eliminates intron code and 2) a demetic approach to virtually parallelize the system on a single processor. Acceleration of runtime is especially important when operating with complex data sets, because they are occuring in real-world applications. We compare GP performance on medical classification problems from a benchmark database with results obtained by neural networks. Our results show that GP performs comparable in classification and generalization.",
"title": ""
}
] |
[
{
"docid": "7c9cd59a4bb14f678c57ad438f1add12",
"text": "This paper proposes a new ensemble method built upon a deep neural network architecture. We use a set of meteorological models for rain forecast as base predictors. Each meteorological model is provided to a channel of the network and, through a convolution operator, the prediction models are weighted and combined. As a result, the predicted value produced by the ensemble depends on both the spatial neighborhood and the temporal pattern. We conduct some computational experiments in order to compare our approach to other ensemble methods widely used for daily rainfall prediction. The results show that our architecture based on ConvLSTM networks is a strong candidate to solve the problem of combining predictions in a spatiotemporal context.",
"title": ""
},
{
"docid": "bba15d88edc2574dcb3b12a78c3b2d57",
"text": "Gaussian Processes (GPs) are widely used tools in statistics, machine learning, robotics, computer vision, and scientific computation. However, despite their popularity, they can be difficult to apply; all but the simplest classification or regression applications require specification and inference over complex covariance functions that do not admit simple analytical posteriors. This paper shows how to embed Gaussian processes in any higherorder probabilistic programming language, using an idiom based on memoization, and demonstrates its utility by implementing and extending classic and state-of-the-art GP applications. The interface to Gaussian processes, called gpmem, takes an arbitrary real-valued computational process as input and returns a statistical emulator that automatically improve as the original process is invoked and its input-output behavior is recorded. The flexibility of gpmem is illustrated via three applications: (i) Robust GP regression with hierarchical hyper-parameter learning, (ii) discovering symbolic expressions from time-series data by fully Bayesian structure learning over kernels generated by a stochastic grammar, and (iii) a bandit formulation of Bayesian optimization with automatic inference and action selection. All applications share a single 50-line Python library and require fewer than 20 lines of probabilistic code each.",
"title": ""
},
{
"docid": "7ce1646e0fe1bd83f9feb5ec20233c93",
"text": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development, and be put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function to structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.",
"title": ""
},
{
"docid": "4dfb1fab364811cdd9cd7baa8c9ae0f3",
"text": "Understanding the mechanisms of evolution of brain pathways for complex behaviours is still in its infancy. Making further advances requires a deeper understanding of brain homologies, novelties and analogies. It also requires an understanding of how adaptive genetic modifications lead to restructuring of the brain. Recent advances in genomic and molecular biology techniques applied to brain research have provided exciting insights into how complex behaviours are shaped by selection of novel brain pathways and functions of the nervous system. Here, we review and further develop some insights to a new hypothesis on one mechanism that may contribute to nervous system evolution, in particular by brain pathway duplication. Like gene duplication, we propose that whole brain pathways can duplicate and the duplicated pathway diverge to take on new functions. We suggest that one mechanism of brain pathway duplication could be through gene duplication, although other mechanisms are possible. We focus on brain pathways for vocal learning and spoken language in song-learning birds and humans as example systems. This view presents a new framework for future research in our understanding of brain evolution and novel behavioural traits.",
"title": ""
},
{
"docid": "4073da56cc874ea71f5e8f9c1c376cf8",
"text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.",
"title": ""
},
{
"docid": "f0bbe4e6d61a808588153c6b5fc843aa",
"text": "The development of Information and Communications Technologies (ICT) has affected various fields including the automotive industry. Therefore, vehicle network protocols such as Controller Area Network (CAN), Local Interconnect Network (LIN), and FlexRay have been introduced. Although CAN is the most widely used for vehicle network protocol, its security issue is not properly addressed. In this paper, we propose a security gateway, an improved version of existing CAN gateways, to protect CAN from spoofing and DoS attacks. We analyze sequence of messages based on the driver’s behavior to resist against spoofing attack and utilize a temporary ID and SipHash algorithm to resist against DoS attack. For the verification of our proposed method, OMNeT++ is used. The suggested method shows high detection rate and low increase of traffic. Also, analysis of frame drop rate during DoS attack shows that our suggested method can defend DoS attack.",
"title": ""
},
{
"docid": "094d027465ac59fda9ae67d62e83782f",
"text": "In this paper, frequency domain techniques are used to derive the tracking properties of the recursive least squares (RLS) algorithm applied to an adaptive antenna array in a mobile fading environment, expanding the use of such frequency domain approaches for nonstationary RLS tracking to the interference canceling problem that characterizes the use of antenna arrays in mobile wireless communications. The analysis focuses on the effect of the exponential weighting of the correlation estimation filter and its effect on the estimations of the time variant autocorrelation matrix and cross-correlation vector. Specifically, the case of a flat Rayleigh fading desired signal applied to an array in the presence of static interferers is considered with an AR2 fading process approximating the Jakes’ fading model. The result is a mean square error (MSE) performance metric parameterized by the fading bandwidth and the RLS exponential weighting factor, allowing optimal parameter selection. The analytic results are verified and demonstrated with a simulation example.",
"title": ""
},
{
"docid": "b14502732b07cfc3153cd419b01084e5",
"text": "Functional logic programming and probabilistic programming have demonstrated the broad benefits of combining laziness (non-strict evaluation with sharing of the results) with non-determinism. Yet these benefits are seldom enjoyed in functional programming, because the existing features for non-strictness, sharing, and non-determinism in functional languages are tricky to combine.\n We present a practical way to write purely functional lazy non-deterministic programs that are efficient and perspicuous. We achieve this goal by embedding the programs into existing languages (such as Haskell, SML, and OCaml) with high-quality implementations, by making choices lazily and representing data with non-deterministic components, by working with custom monadic data types and search strategies, and by providing equational laws for the programmer to reason about their code.",
"title": ""
},
{
"docid": "b677a4762ceb4ec6f9f1fc418a701982",
"text": "NoSQL databases are the new breed of databases developed to overcome the drawbacks of RDBMS. The goal of NoSQL is to provide scalability, availability and meet other requirements of cloud computing. The common motivation of NoSQL design is to meet scalability and fail over. In most of the NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce or Hadoop Distributed File System or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are mostly used and they can be termed as the representative of NoSQL world. This tutorial discusses the features of NoSQL databases in the light of CAP theorem.",
"title": ""
},
{
"docid": "d440e08b7f2868459fbb31b94c15db5b",
"text": "Recently, the necessity of hybrid-microgrid system has been proved as a modern power structure. This paper studies a power management system (PMS) in a hybrid network to control the power-flow procedure between DC and AC buses. The proposed architecture for PMS is designed to eliminate the power disturbances and manage the automatic connection among multiple sources. In this paper, PMS benefits from a 3-phase proportional resonance (PR) control ability to accurately adjust the inverter operation. Also, a Photo-Voltaic (PV) unit and a distributed generator (DG) are considered to supply the load demand power. Compared to the previous studies, the applied scheme has sufficient capability of quickly supplying the load in different scenarios with no network failures. The validity of implemented method is verified through the simulation results.",
"title": ""
},
{
"docid": "0c70966c4dbe41458f7ec9692c566c1f",
"text": "By 2012 the U.S. military had increased its investment in research and production of unmanned aerial vehicles (UAVs) from $2.3 billion in 2008 to $4.2 billion [1]. Currently UAVs are used for a wide range of missions such as border surveillance, reconnaissance, transportation and armed attacks. UAVs are presumed to provide their services at any time, be reliable, automated and autonomous. Based on these presumptions, governmental and military leaders expect UAVs to improve national security through surveillance or combat missions. To fulfill their missions, UAVs need to collect and process data. Therefore, UAVs may store a wide range of information from troop movements to environmental data and strategic operations. The amount and kind of information enclosed make UAVs an extremely interesting target for espionage and endangers UAVs of theft, manipulation and attacks. Events such as the loss of an RQ-170 Sentinel to Iranian military forces on 4th December 2011 [2] or the “keylogging” virus that infected an U.S. UAV fleet at Creech Air Force Base in Nevada in September 2011 [3] show that the efforts of the past to identify risks and harden UAVs are insufficient. Due to the increasing governmental and military reliance on UAVs to protect national security, the necessity of a methodical and reliable analysis of the technical vulnerabilities becomes apparent. We investigated recent attacks and developed a scheme for the risk assessment of UAVs based on the provided services and communication infrastructures. We provide a first approach to an UAV specific risk assessment and take into account the factors exposure, communication systems, storage media, sensor systems and fault handling mechanisms. We used this approach to assess the risk of some currently used UAVs: The “MQ-9 Reaper” and the “AR Drone”. A risk analysis of the “RQ-170 Sentinel” is discussed.",
"title": ""
},
{
"docid": "7a3573bfb32dc1e081d43fe9eb35a23b",
"text": "Collections of relational paraphrases have been automatically constructed from large text corpora, as a WordNet counterpart for the realm of binary predicates and their surface forms. However, these resources fall short in their coverage of hypernymy links (subsumptions) among the synsets of phrases. This paper closes this gap by computing a high-quality alignment between the relational phrases of the Patty taxonomy, one of the largest collections of this kind, and the verb senses of WordNet. To this end, we devise judicious features and develop a graph-based alignment algorithm by adapting and extending the SimRank random-walk method. The resulting taxonomy of relational phrases and verb senses, coined HARPY, contains 20,812 synsets organized into a Directed Acyclic Graph (DAG) with 616,792 hypernymy links. Our empirical assessment, indicates that the alignment links between Patty and WordNet have high accuracy, with Mean Reciprocal Rank (MRR) score 0.7 and Normalized Discounted Cumulative Gain (NDCG) score 0.73. As an additional extrinsic value, HARPY provides fine-grained lexical types for the arguments of verb senses in WordNet.",
"title": ""
},
{
"docid": "9e592238813d2bb28629f3dddaba109d",
"text": "Traveling-wave array design techniques are applied to microstrip comb-line antennas in the millimeter-wave band. The simple design procedure is demonstrated. To neglect the effect of reflection waves in the design, a radiating element with a reflection-canceling slit and a stub-integrated radiating element are proposed. Matching performance is also improved.",
"title": ""
},
{
"docid": "3e6aac2e0ff6099aabeee97dc1292531",
"text": "A lthough ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.",
"title": ""
},
{
"docid": "4cb0d0d6f1823f108a3fc32e0c407605",
"text": "This paper describes a novel method to approximate instantaneous frequency of non-stationary signals through an application of fractional Fourier transform (FRFT). FRFT enables us to build a compact and accurate chirp dictionary for each windowed signal, thus the proposed approach offers improved computational efficiency, and good performance when compared with chirp atom method.",
"title": ""
},
{
"docid": "5a805b6f9e821b7505bccc7b70fdd557",
"text": "There are many factors that influence the translators while translating a text. Amongst these factors is the notion of ideology transmission through the translated texts. This paper is located within the framework of Descriptive Translation Studies (DTS) and Critical Discourse Analysis (CDA). It investigates the notion of ideology with particular use of critical discourse analysis. The purpose is to highlight the relationship between language and ideology in translated texts. It also aims at discovering whether the translator’s socio-cultural and ideology constraints influence the production of his/her translations. As a mixed research method study, the corpus consists of two different Arabic translated versions of the English book “Media Control” by Noam Chomsky. The micro-level contains the qualitative stage where detailed description and comparison -contrastive and comparativeanalysis will be provided. The micro-level analysis should include the lexical items along with the grammatical items (passive verses. active, nominalisation vs. de-nominalisation, moralisation and omission vs. addition). In order to have more reliable and objective data, computed frequencies of the ideological significance occurrences along with percentage and Chi-square formula were conducted through out the data analysis stage which then form the quantitative part of the current study. The main objective of the mentioned data analysis methodologies is to find out the dissimilarity between the proportions of the information obtained from the target texts (TTs) and their equivalent at the source text (ST). The findings indicts that there are significant differences amongst the two TTs in relation to International Journal of Linguistics ISSN 1948-5425 2014, Vol. 6, No. 3 www.macrothink.org/ijl 119 the word choices including the lexical items and the other syntactic structure compared by the ST. These significant differences indicate some ideological transmission through translation process of the two TTs. Therefore, and to some extent, it can be stated that the differences were also influenced by the translators’ socio-cultural and ideological constraints.",
"title": ""
},
{
"docid": "6c730f32b02ca58f66e98f9fc5181484",
"text": "When analyzing a visualized network, users need to explore different sections of the network to gain insight. However, effective exploration of large networks is often a challenge. While various tools are available for users to explore the global and local features of a network, these tools usually require significant interaction activities, such as repetitive navigation actions to follow network nodes and edges. In this paper, we propose a structure-based suggestive exploration approach to support effective exploration of large networks by suggesting appropriate structures upon user request. Encoding nodes with vectorized representations by transforming information of surrounding structures of nodes into a high dimensional space, our approach can identify similar structures within a large network, enable user interaction with multiple similar structures simultaneously, and guide the exploration of unexplored structures. We develop a web-based visual exploration system to incorporate this suggestive exploration approach and compare performances of our approach under different vectorizing methods and networks. We also present the usability and effectiveness of our approach through a controlled user study with two datasets.",
"title": ""
},
{
"docid": "38e9aa4644edcffe87dd5ae497e99bbe",
"text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.",
"title": ""
},
{
"docid": "c79510daa790e5c92e0c3899cc4a563b",
"text": "Purpose – The purpose of this study is to interpret consumers’ emotion in their consumption experience in the context of mobile commerce from an experiential view. The study seeks to address concerns about the experiential aspects of mobile commerce regardless of the consumption type. For the purpose, the authors aims to propose a stimulus-organism-response (S-O-R) based model that incorporates both utilitarian and hedonic factors of consumers. Design/methodology/approach – A survey study was conducted to collect data from 293 mobile phone users. The questionnaire was administered in study classrooms, a library, or via e-mail. The measurement model and structural model were examined using LISREL 8.7. Findings – The results of this research implied that emotion played a significant role in the mobile consumption experience; hedonic factors had a positive effect on the consumption experience, while utilitarian factors had a negative effect on the consumption experience of consumers. The empirical findings also indicated that media richness was as important as subjective norms, and more important than convenience and self-efficacy. Originality/value – Few m-commerce studies have focused directly on the experiential aspects of consumption, including the hedonic experience and positive emotions among mobile device users. Applying the stimulus-organism-response (S-O-R) framework from the perspective of the experiential view, the current research model is developed to examine several utilitarian and hedonic factors in the context of the consumption experience, and indicates a comparison between the information processing (utilitarian) view and the experiential (hedonic) view of consumer behavior. It illustrates the relationships among six variables (i.e. convenience, media richness, subjective norms, self-efficacy, emotion, and consumption experience) in a mobile commerce context.",
"title": ""
},
{
"docid": "d9bd23208ab6eb8688afea408a4c9eba",
"text": "A novel ultra-wideband (UWB) bandpass filter with 5 to 6 GHz rejection band is proposed. The multiple coupled line structure is incorporated with multiple-mode resonator (MMR) to provide wide transmission band and enhance out-of band performance. To inhibit the signals ranged from 5- to 6-GHz, four stepped-impedance open stubs are implemented on the MMR without increasing the size of the proposed filter. The design of the proposed UWB filter has two transmission bands. The first passband from 2.8 GHz to 5 GHz has less than 2 dB insertion loss and greater than 18 dB return loss. The second passband within 6 GHz and 10.6 GHz has less than 1.5 dB insertion loss and greater than 15 dB return loss. The rejection at 5.5 GHz is better than 50 dB. This filter can be integrated in UWB radio systems and efficiently enhance the interference immunity from WLAN.",
"title": ""
}
] |
scidocsrr
|
5be2282ba8497d916872da7889f69e18
|
A Decade Bandwidth 90 W GaN HEMT Push-Pull Power Amplifier for VHF / UHF Applications
|
[
{
"docid": "fb03575e346b527560f42db53b5b736e",
"text": "We have demonstrated a RLC matched GaN HEMT power amplifier with 12 dB gain, 0.05-2.0 GHz bandwidth, 8 W CW output power and 36.7-65.4% drain efficiency over the band. The amplifier is packaged in a ceramic S08 package and contains a GaN on SiC device operating at 28 V drain voltage, alongside GaAs integrated passive matching circuitry. A second circuit designed for 48 V operation and 15 W CW power over the same band, obtains over 20 W under pulsed condition with 10% duty cycle and 100 mus pulse width. CW measurements are pending after assembly in an alternate high power package. These amplifiers are suitable for use in wideband digital cellular infrastructure, handheld radios, and jamming applications.",
"title": ""
}
] |
[
{
"docid": "739669a06f0fbe94f5c21e1b0b514345",
"text": "This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.",
"title": ""
},
{
"docid": "4c5dd43f350955b283f1a04ddab52d41",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "30a8cc4c36c6e760e01f87a6cfcfdd44",
"text": "Trust is an essential factor in ensuring robust human-robot interaction. However, recent work suggests that people can be too trusting of the technology with which they interact during emergencies, causing potential harm to themselves. To test whether this \"over-trust\" also extends to normal day-to-day activities, such as driving a car, we carried out a series of experiments with an autonomous car simulator. Participants (N=73) engaged in a scenario with no, correct or false audible information regarding the state of traffic around the self-driving vehicle, and were told they could assume control at any point in the interaction. Results show that participants trust the autonomous system, even when they should not, leading to potential dangerous situations.",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "ff9ca485a07dca02434396eca0f0c94f",
"text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.",
"title": ""
},
{
"docid": "db87b17e0fd3310fd462c725a5462e6a",
"text": "We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercionresistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting.1 Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166 1 One noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]. G. Danezis (Ed.): FC 2011, LNCS 7035, pp. 47–61, 2012. c © Springer-Verlag Berlin Heidelberg 2012 48 J. Clark and U. Hengartner",
"title": ""
},
{
"docid": "0b50ec58f82b7ac4ad50eb90425b3aea",
"text": "OBJECTIVES\nThe study aimed (1) to examine if there are equivalent results in terms of union, alignment and elbow functionally comparing single- to dual-column plating of AO/OTA 13A2 and A3 distal humeral fractures and (2) if there are more implant-related complications in patients managed with bicolumnar plating compared to single-column plate fixation.\n\n\nDESIGN\nThis was a multi-centred retrospective comparative study.\n\n\nSETTING\nThe study was conducted at two academic level 1 trauma centres.\n\n\nPATIENTS/PARTICIPANTS\nA total of 105 patients were identified to have surgical management of extra-articular distal humeral fractures Arbeitsgemeinschaft für Osteosynthesefragen/Orthopaedic Trauma Association (AO/OTA) 13A2 and AO/OTA 13A3).\n\n\nINTERVENTION\nPatients were treated with traditional dual-column plating or a single-column posterolateral small-fragment pre-contoured locking plate used as a neutralisation device with at least five screws in the short distal segment.\n\n\nMAIN OUTCOME MEASUREMENTS\nThe patients' elbow functionality was assessed in terms of range of motion, union and alignment. In addition, the rate of complications between the groups including radial nerve palsy, implant-related complications (painful prominence and/or ulnar nerve neuritis) and elbow stiffness were compared.\n\n\nRESULTS\nPatients treated with single-column plating had similar union rates and alignment. However, single-column plating resulted in a significantly better range of motion with less complications.\n\n\nCONCLUSIONS\nThe current study suggests that exposure/instrumentation of only the lateral column is a reliable and preferred technique. This technique allows for comparable union rates and alignment with increased elbow functionality and decreased number of complications.",
"title": ""
},
{
"docid": "017b364e58390f00aab3b79b034ee6dc",
"text": "Pervasive applications rely on data captured from the physical world through sensor devices. Data provided by these devices, however, tend to be unreliable. The data must, therefore, be cleaned before an application can make use of them, leading to additional complexity for application development and deployment. Here we present Extensible Sensor stream Processing (ESP), a framework for building sensor data cleaning infrastructures for use in pervasive applications. ESP is designed as a pipeline using declarative cleaning mechanisms based on spatial and temporal characteristics of sensor data. We demonstrate ESP’s effectiveness and ease of use through three real-world scenarios.",
"title": ""
},
{
"docid": "934bdd758626ec37241cffba8e2cbeb9",
"text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.",
"title": ""
},
{
"docid": "c13ef40a8283f4c0aa6d61c32c6b1a79",
"text": "Fingerprint individuality is the study of the extent of uniqueness of fingerprints and is the central premise of expert testimony in court. A forensic expert testifies whether a pair of fingerprints is either a match or non-match by comparing salient features of the fingerprint pair. However, the experts are rarely questioned on the uncertainty associated with the match: How likely is the observed match between the fingerprint pair due to just random chance? The main concern with the admissibility of fingerprint evidence is that the matching error rates (i.e., the fundamental error rates of matching by the human expert) are unknown. The problem of unknown error rates is also prevalent in other modes of identification such as handwriting, lie detection, etc. Realizing this, the U.S. Supreme Court, in the 1993 case of Daubert vs. Merrell Dow Pharmaceuticals, ruled that forensic evidence presented in a court is subject to five principles of scientific validation, namely whether (i) the particular technique or methodology has been subject to statistical hypothesis testing, (ii) its error rates has been established, (iii) standards controlling the technique’s operation exist and have been maintained, (iv) it has been peer reviewed, and (v) it has a general widespread acceptance. Following Daubert, forensic evidence based on fingerprints was first challenged in the 1999 case of USA vs. Byron Mitchell based on the “known error rate” condition 2 mentioned above, and subsequently, in 20 other cases involving fingerprint evidence. The establishment of matching error rates is directly related to the extent of fingerprint individualization. This article gives an overview of the problem of fingerprint individuality, the challenges faced and the models and methods that have been developed to study this problem. Related entries: Fingerprint individuality, fingerprint matching automatic, fingerprint matching manual, forensic evidence of fingerprint, individuality. Definitional entries: 1.Genuine match: This is the match between two fingerprint images of the same person. 2. Impostor match: This is the match between a pair of fingerprints from two different persons. 3. Fingerprint individuality: It is the study of the extent of which different fingerprints tend to match with each other. It is the most important measure to be judged when fingerprint evidence is presented in court as it reflects the uncertainty with the experts’ decision. 4. Variability: It refers to the differences in the observed features from one sample to another in a population. The differences can be random, that is, just by chance, or systematic due to some underlying factor that governs the variability.",
"title": ""
},
{
"docid": "7ad76f9f584b33ffd85b8e5c3bf50e92",
"text": "Deep residual learning (ResNet) (He et al., 2016) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet has won the ImageNet ILSVRC 2015 classification task, and achieved state-of-theart performances in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning, and investigate its impacts on the task of distantly supervised noisy relation extraction. In contradictory to popular beliefs that ResNet only works well for very deep networks, we found that even with 9 layers of CNNs, using identity mapping could significantly improve the performance for distantly-supervised relation extraction.",
"title": ""
},
{
"docid": "43233e45f07b80b8367ac1561356888d",
"text": "Current Zero-Shot Learning (ZSL) approaches are restricted to recognition of a single dominant unseen object category in a test image. We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the ‘recognition’ and ‘localization’ of an unseen category. To address this limitation, we introduce a new ‘Zero-Shot Detection’ (ZSD) problem setting, which aims at simultaneously recognizing and locating object instances belonging to novel categories without any training examples. We also propose a new experimental protocol for ZSD based on the highly challenging ILSVRC dataset, adhering to practical issues, e.g., the rarity of unseen objects. To the best of our knowledge, this is the first end-to-end deep network for ZSD that jointly models the interplay between visual and semantic domain information. To overcome the noise in the automatically derived semantic descriptions, we utilize the concept of meta-classes to design an original loss function that achieves synergy between max-margin class separation and semantic space clustering. Furthermore, we present a baseline approach extended from recognition to detection setting. Our extensive experiments show significant performance boost over the baseline on the imperative yet difficult ZSD problem.",
"title": ""
},
{
"docid": "1ac1f9f1813d25c0d7a9c542f6568e35",
"text": "BACKGROUND\nThe Crohn's Disease Activity Index (CDAI) is a measure of disease activity based on symptoms, signs and a laboratory test. The US Food and Drug Administration has indicated that patient reported outcomes (PROs) should be the primary outcome in randomised controlled trials for Crohn's disease (CD).\n\n\nAIM\nAs no validated PRO exists for CD, to investigate whether CDAI diary card items could be modified for this purpose.\n\n\nMETHODS\nData from a trial of rifaximin-extended intestinal release were used to identify cut-points for stool frequency, pain and general well-being using receiver operating characteristic curves with CDAI <150 as criterion. The operating properties of 2- and 3-item PRO were evaluated using data from a trial of methotrexate in CD. Regression analysis determined PRO2 and PRO3 scores that correspond to CDAI-defined thresholds of 150, 220 and 450 and changes of 50, 70 and 100 points.\n\n\nRESULTS\nOptimum cut-points for CDAI remission were mean daily stool frequency ≤1.5, abdominal pain ≤1, and general well-being score of ≤1 (areas under the ROC curve 0.79, 0.91 and 0.89, respectively). The effect estimates were similar using 2- and 3-item PROs or CDAI. PRO2 and PRO3 values corresponding to CDAI scores of 150, 220 and 450 points were 8, 14, 34 and 13, 22, 53. The corresponding values for CDAI changes of 50, 70 and 100, were 2, 5, 8 and 5, 9, 14. Responsiveness to change was similar for both PROs.\n\n\nCONCLUSION\nPatient reported outcomes derived from CDAI diary items may be appropriate for use in clinical trials for CD.",
"title": ""
},
{
"docid": "84a7ef0d27649619119892c6c91cf63c",
"text": "As the most-studied form of leadership across disciplines in both Western and Chinese contexts, transformational school leadership has the potential to suit diverse national and cultural contexts. Given the growing evidence showing the positive effects of transformational leadership on various school outcomes as it relates to school environment, teacher and student achievement, we wanted to explore the factors that gave rise to transformational leadership. The purpose of this study was to identify and compare the antecedents fostering transformational leadership in the contexts of both the United States and China. This paper reviews and discusses the empirical studies of the last two decades, concentrating on the variables that are antecedent to transformational leadership mainly in the educational context, but also in public management, business and psychology. Results show that transformational leadership is related to three sets of antecedents, which include: (1) the leader’s qualities (e.g., self-efficacy, values, traits, emotional intelligence); (2) organizational features (e.g., organization fairness); and (3) the leader’s colleagues’ characteristics (e.g., follower’s initial developmental level). Some antecedents were common to both contexts, while other antecedents appeared to be national context specific. The implications of the findings for future research and leader preparation in different national contexts are discussed.",
"title": ""
},
{
"docid": "7a3aaec6e397b416619bcde0c565b0f6",
"text": "This paper gives an overview of automatic speaker recognition technology, with an emphasis on text-independent recognition. Speaker recognition has been studied actively for several decades. We give an overview of both the classical and the state-of-the-art methods. We start with the fundamentals of automatic speaker recognition, concerning feature extraction and speaker modeling. We elaborate advanced computational techniques to address robustness and session variability. The recent progress from vectors towards supervectors opens up a new area of exploration and represents a technology trend. We also provide an overview of this recent development and discuss the evaluation methodology of speaker recognition systems. We conclude the paper with discussion on future directions. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dde2211bd3e9cceb20cce63d670ebc4c",
"text": "This paper presents the design of a 60 GHz phase shifter integrated with a low-noise amplifier (LNA) and power amplifier (PA) in a 65 nm CMOS technology for phased array systems. The 4-bit digitally controlled RF phase shifter is based on programmable weighted combinations of I/Q paths using digitally controlled variable gain amplifiers (VGAs). With the combination of an LNA, a phase shifter and part of a combiner, each receiver path achieves 7.2 dB noise figure, a 360° phase shift range in steps of approximately 22.5°, an average insertion gain of 12 dB at 61 GHz, a 3 dB-bandwidth of 5.5 GHz and dissipates 78 mW. Consisting of a phase shifter and a PA, one transmitter path achieves a maximum output power of higher than +8.3 dBm, a 360° phase shift range in 22.5° steps, an average insertion gain of 7.7 dB at 62 GHz, a 3 dB-bandwidth of 6.5 GHz and dissipates 168 mW.",
"title": ""
},
{
"docid": "02dcb2f02432f739e1d7d3c4f9ae36f0",
"text": "Sentiment analysis has played a primary role in text classification. It is an undoubted fact that some years ago, textual information was spreading in manageable rates; however, nowadays, such information has overcome even the most ambiguous expectations and constantly grows within seconds. It is therefore quite complex to cope with the vast amount of textual data particularly if we also take the incremental production speed into account. Social media, e-commerce, news articles, comments and opinions are broadcasted on a daily basis. A rational solution, in order to handle the abundance of data, would be to build automated information processing systems, for analyzing and extracting meaningful patterns from text. The present paper focuses on sentiment analysis applied in Greek texts. Thus far, there is no wide availability of natural language processing tools for Modern Greek. Hence, a thorough analysis of Greek, from the lexical to the syntactical level, is difficult to perform. This paper attempts a different approach, based on the proven capabilities of gradient boosting, a well-known technique for dealing with high-dimensional data. The main rationale is that since English has dominated the area of preprocessing tools and there are also quite reliable translation services, we could exploit them to transform Greek tokens into English, thus assuring the precision of the translation, since the translation of large texts is not always reliable and meaningful. The new feature set of English tokens is augmented with the original set of Greek, consequently producing a high dimensional dataset that poses certain difficulties for any traditional classifier. Accordingly, we apply gradient boosting machines, an ensemble algorithm that can learn with different loss functions providing the ability to work efficiently with high dimensional data. Moreover, for the task at hand, we deal with a class imbalance issues since the distribution of sentiments in real-world applications often displays issues of inequality. For example, in political forums or electronic discussions about immigration or religion, negative comments overwhelm the positive ones. The class imbalance problem was confronted using a hybrid technique that performs a variation of under-sampling the majority class and over-sampling the minority class, respectively. Experimental results, considering different settings, such as translation of tokens against translation of sentences, consideration of limited Greek text preprocessing and omission of the translation phase, demonstrated that the proposed gradient boosting framework can effectively cope with both high-dimensional and imbalanced datasets and performs significantly better than a plethora of traditional machine learning classification approaches in terms of precision and recall measures.",
"title": ""
},
{
"docid": "621840a3c2637841b9da1e74c99e98f1",
"text": "Topic modeling is a type of statistical model for discovering the latent “topics” that occur in a collection of documents through machine learning. Currently, latent Dirichlet allocation (LDA) is a popular and common modeling approach. In this paper, we investigate methods, including LDA and its extensions, for separating a set of scientific publications into several clusters. To evaluate the results, we generate a collection of documents that contain academic papers from several different fields and see whether papers in the same field will be clustered together. We explore potential scientometric applications of such text analysis capabilities.",
"title": ""
},
{
"docid": "570e48e839bd2250473d4332adf2b53f",
"text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts; Including: Socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effect of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients aged 30˂40 years, and 68.3 % were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. . Recommendations: Counseling for family caregivers of patients who underwent autologous bone marrow transplantation and carrying out rehabilitation program for the patients and their caregivers to be performed properly during the rehabilitation period at caner hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.",
"title": ""
},
{
"docid": "9adaeac8cedd4f6394bc380cb0abba6e",
"text": "The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, \"cocktail-party\" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the \"cocktail party problem\".",
"title": ""
}
] |
scidocsrr
|
8f60475750343d2b3f5c0df19c4095b1
|
DART: Dense Articulated Real-Time Tracking
|
[
{
"docid": "8b0a90d4f31caffb997aced79c59e50c",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. 
Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
}
] |
[
{
"docid": "1481340177c3fcdc124c299cc87848cb",
"text": "The grouping genetic algorithm (GGA) is a genetic algorithm heavily modified to suit the structure of grouping problems. Those are the problems where the aim is to find a good partition of a set or to group together the members of the set. The bin packing problem (BPP) is a well known NP-hard grouping problem: items of various sires have to be grouped inside bins of tixed capacity. On the other hand, the reduction method of Martello and Toth, based on their dominance criterion, constitutes one of the best OR techniques for optimization of the BPP to date. In this article, we first describe the GGA paradigm as compared to me classic Holland-style GA and the ordering GA. We then show how the bin packing GGA can be enhanced with a local optimization inspired by the dominance criterion. An extensive experimental comparison shows that the combination yields an algorithm superior to either of its components. Key urords: grouping, partitioning, bin packing, genetic algorithm, solution encoding, dominance, reduction",
"title": ""
},
{
"docid": "cf3354d0a85ea1fa2431057bdf6b6d0f",
"text": "Increasingly, scientific computing applications must accumulate and manage massive datasets, as well as perform sophisticated computations over these data. Such applications call for data-intensive scalable computer (DISC) systems, which differ in fundamental ways from existing high-performance computing systems.",
"title": ""
},
{
"docid": "93c928adef35a409acaa9b371a1498f3",
"text": "The acquisition of a new motor skill is characterized first by a short-term, fast learning stage in which performance improves rapidly, and subsequently by a long-term, slower learning stage in which additional performance gains are incremental. Previous functional imaging studies have suggested that distinct brain networks mediate these two stages of learning, but direct comparisons using the same task have not been performed. Here we used a task in which subjects learn to track a continuous 8-s sequence demanding variable isometric force development between the fingers and thumb of the dominant, right hand. Learning-associated changes in brain activation were characterized using functional MRI (fMRI) during short-term learning of a novel sequence, during short-term learning after prior, brief exposure to the sequence, and over long-term (3 wk) training in the task. Short-term learning was associated with decreases in activity in the dorsolateral prefrontal, anterior cingulate, posterior parietal, primary motor, and cerebellar cortex, and with increased activation in the right cerebellar dentate nucleus, the left putamen, and left thalamus. Prefrontal, parietal, and cerebellar cortical changes were not apparent with short-term learning after prior exposure to the sequence. With long-term learning, increases in activity were found in the left primary somatosensory and motor cortex and in the right putamen. Our observations extend previous work suggesting that distinguishable networks are recruited during the different phases of motor learning. While short-term motor skill learning seems associated primarily with activation in a cortical network specific for the learned movements, long-term learning involves increased activation of a bihemispheric cortical-subcortical network in a pattern suggesting \"plastic\" development of new representations for both motor output and somatosensory afferent information.",
"title": ""
},
{
"docid": "cdf4f5074ec86db3948df3497f9896ec",
"text": "This paper investigates algorithms to automatically adapt the learning rate of neural networks (NNs). Starting with stochastic gradient descent, a large variety of learning methods has been proposed for the NN setting. However, these methods are usually sensitive to the initial learning rate which has to be chosen by the experimenter. We investigate several features and show how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand. Introduction Due to the recent successes of Neural Networks for tasks such as image classification (Krizhevsky, Sutskever, and Hinton 2012) and speech recognition (Hinton et al. 2012), the underlying gradient descent methods used for training have gained a renewed interest by the research community. Adding to the well known stochastic gradient descent and RMSprop methods (Tieleman and Hinton 2012), several new gradient based methods such as Adagrad (Duchi, Hazan, and Singer 2011) or Adadelta (Zeiler 2012) have been proposed. However, most of the proposed methods rely heavily on a good choice of an initial learning rate. Compounding this issue is the fact that the range of good learning rates for one problem is often small compared to the range of good learning rates across different problems, i.e., even an experienced experimenter often has to manually search for good problem-specific learning rates. A tempting alternative to manually searching for a good learning rate would be to learn a control policy that automatically adjusts the learning rate without further intervention using, for example, reinforcement learning techniques (Sutton and Barto 1998). Unfortunately, the success of learning such a controller from data is likely to depend heavily on the features made available to the learning algorithm. A wide array of reinforcement learning literature has shown the importance of good features in tasks ranging from Tetris (Thiery and Scherrer 2009) to haptile object identification (Kroemer, Lampert, and Peters 2011). Thus, the first step towards applying RL methods to control learning rates is to find good features. Subsequently, the main contributions of this paper are Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. • Identifying informative features for the automatic control of the learning rate. • Proposing a learning setup for a controller that automatically adapts the step size of NN training algorithms. • Showing that the resulting controller generalizes across different tasks and architectures. Together, these contributions enable robust and efficient training of NNs without the need of manual step size tuning. Method The goal of this paper is to develop an adaptive controller for the learning rate used in training algorithms such as Stochastic Gradient Descent (SGD) or RMSprop (Tieleman and Hinton 2012). We start with a general statement of the problem we are aiming to solve. Problem Statement We are interested in finding the minimizer ω∗ = arg min ω F (X;ω), (1) where in our case ω represents the weight vector of the NN and X = {x1, . . . ,xN} is the set of N training examples (e.g., images and labels). The function F (·) sums over the function values induced by the individual inputs such that",
"title": ""
},
{
"docid": "33187aba3285bcd040c45edf2eba284e",
"text": "This paper describes the acquisition of the multichannel multimodal database AV@CAR for automatic audio-visual speech recognition in cars. Automatic speech recognition (ASR) plays an important role inside vehicles to keep the driver away from distraction. It is also known that visual information (lip-reading) can improve accuracy in ASR under adverse conditions as those within a car. The corpus described here is intended to provide training and testing material for several classes of audiovisual speech recognizers including isolated word system, word-spotting systems, vocabulary independent systems, and speaker dependent or speaker independent systems for a wide range of applications. The audio database is composed of seven audio channels including, clean speech (captured using a close talk microphone), noisy speech from several microphones placed on the overhead of the cabin, noise only signal coming from the engine compartment and information about the speed of the car. For the video database, a small video camera sensible to the visible and the near infrared bands is placed on the windscreen and used to capture the face of the driver. This is done under different light conditions both during the day and at night. Additionally, the same individuals are recorded in laboratory, under controlled environment conditions to obtain noise free speech signals, 2D images and 3D + texture face models.",
"title": ""
},
{
"docid": "089273886a4d7ea591a7d631042be92b",
"text": "Student learning and academic performance hinge largely on frequency of class attendance and participation. The fingerprint recognition system aims at providing an accurate and efficient attendance management service to staff and students within an existing portal system. The integration of a unique and accurate identification system into the existing portal system offers at least, two advantages: accurate and efficient analysis and reporting of student attendance on a continuous basis; and also facilitating the provision of personalised services, enhancing user experience altogether. An integrated portal system was developed to automate attendance management and tested for fifty students in five attempts. The 98% accuracy achieved by the system points to the feasibility of large scale deployment and interoperability of multiple devices using existing technology infrastructure.",
"title": ""
},
{
"docid": "0b01d683bfd34d686c1b282b7f80024d",
"text": "Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database. We address this new challenge by learning a neural response generation system from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded multimodal conversational model where an encoded knowledge base (KB) representation is appended to the decoder input. Our model substantially outperforms strong baselines in terms of text-based similarity measures (over 9 BLEU points, 3 of which are solely due to the use of additional information from the KB).",
"title": ""
},
{
"docid": "2c1914b6f4e373005a60d6a38117b610",
"text": "As it is known that food waste can be reduced by the larvae of Hermetia illucens (Black soldier fly, BSF), the scientific and commercial value of BSF larvae has increased recently. We hypothesised that the ability of catabolic degradation by BSF larvae might be due to intestinal microorganisms. Herein, we analysed the bacterial communities in the gut of BSF larvae by pyrosequencing of extracting intestinal metagenomic DNA from larvae that had been fed three different diets. The 16S rRNA sequencing results produced 9737, 9723 and 5985 PCR products from larval samples fed food waste, cooked rice and calf forage, respectively. A BLAST search using the EzTaxon program showed that the bacterial community in the gut of larvae fed three different diets was mainly composed of the four phyla with dissimilar proportions. Although the composition of the bacterial communities depended on the different nutrient sources, the identified bacterial strains in the gut of BSF larvae represented unique bacterial species that were unlike the intestinal microflora of other insects. Thus, our study analysed the structure of the bacterial communities in the gut of BSF larvae after three different feedings and assessed the application of particular bacteria for the efficient degradation of organic compounds.",
"title": ""
},
{
"docid": "259df0ad497b5fc3318dfca7f8ee1f9a",
"text": "BACKGROUND\nColorectal cancer is a leading cause of morbidity and mortality, especially in the Western world. The human and financial costs of this disease have prompted considerable research efforts to evaluate the ability of screening tests to detect the cancer at an early curable stage. Tests that have been considered for population screening include variants of the faecal occult blood test, flexible sigmoidoscopy and colonoscopy. Reducing mortality from colorectal cancer (CRC) may be achieved by the introduction of population-based screening programmes.\n\n\nOBJECTIVES\nTo determine whether screening for colorectal cancer using the faecal occult blood test (guaiac or immunochemical) reduces colorectal cancer mortality and to consider the benefits, harms and potential consequences of screening.\n\n\nSEARCH STRATEGY\nPublished and unpublished data for this review were identified by: Reviewing studies included in the previous Cochrane review; Searching several electronic databases (Cochrane Library, Medline, Embase, CINAHL, PsychInfo, Amed, SIGLE, HMIC); and Writing to the principal investigators of potentially eligible trials.\n\n\nSELECTION CRITERIA\nWe included in this review all randomised trials of screening for colorectal cancer that compared faecal occult blood test (guaiac or immunochemical) on more than one occasion with no screening and reported colorectal cancer mortality.\n\n\nDATA COLLECTION AND ANALYSIS\nData from the eligible trials were independently extracted by two reviewers. The primary data analysis was performed using the group participants were originally randomised to ('intention to screen'), whether or not they attended screening; a secondary analysis adjusted for non-attendence. We calculated the relative risks and risk differences for each trial, and then overall, using fixed and random effects models (including testing for heterogeneity of effects). We identified nine articles concerning four randomised controlled trials and two controlled trials involving over 320,000 participants with follow-up ranging from 8 to 18 years.\n\n\nMAIN RESULTS\nCombined results from the 4 eligible randomised controlled trials shows that participants allocated to screening had a 16% reduction in the relative risk of colorectal cancer mortality (RR 0.84, CI: 0.78-0.90). In the 3 studies that used biennial screening (Funen, Minnesota, Nottingham) there was a 15% relative risk reduction (RR 0.85, CI: 0.78-0.92) in colorectal cancer mortality. When adjusted for screening attendance in the individual studies, there was a 25% relative risk reduction (RR 0.75, CI: 0.66 - 0.84) for those attending at least one round of screening using the faecal occult blood test.\n\n\nAUTHORS' CONCLUSIONS\nBenefits of screening include a modest reduction in colorectal cancer mortality, a possible reduction in cancer incidence through the detection and removal of colorectal adenomas, and potentially, the less invasive surgery that earlier treatment of colorectal cancers may involve. Harmful effects of screening include the psycho-social consequences of receiving a false-positive result, the potentially significant complications of colonoscopy or a false-negative result, the possibility of overdiagnosis (leading to unnecessary investigations or treatment) and the complications associated with treatment.",
"title": ""
},
{
"docid": "da695403ee969f71ea01a4b16477556f",
"text": "Data augmentation is a widely used technique in many machine learning tasks, such as image classification, to virtually enlarge the training dataset size and avoid overfitting. Traditional data augmentation techniques for image classification tasks create new samples from the original training data by, for example, flipping, distorting, adding a small amount of noise to, or cropping a patch from an original image. In this paper, we introduce a simple but surprisingly effective data augmentation technique for image classification tasks. With our technique, named SamplePairing, we synthesize a new sample from one image by overlaying another image randomly chosen from the training data (i.e., taking an average of two images for each pixel). By using two images randomly selected from the training set, we can generate N new samples from N training samples. This simple data augmentation technique significantly improved classification accuracy for all the tested datasets; for example, the top-1 error rate was reduced from 33.5% to 29.0% for the ILSVRC 2012 dataset with GoogLeNet and from 8.22% to 6.93% in the CIFAR-10 dataset. We also show that our SamplePairing technique largely improved accuracy when the number of samples in the training set was very small. Therefore, our technique is more valuable for tasks with a limited amount of training data, such as medical imaging tasks.",
"title": ""
},
{
"docid": "617fa45a68d607a4cb169b1446aa94bd",
"text": "The Draganflyer is a radio-controlled helicopter. It is powered by 4 rotors and is capable of motion in air in 6 degrees of freedom and of stable hovering. For flying it requires a high degree of skill, with the operator continually making small adjustments. In this paper, we do a theoretical analysis of the dynamics of the Draganflyer in order to develop a model of it from which we can develop a computer control system for stable hovering and indoor flight.",
"title": ""
},
{
"docid": "952735cb937248c837e0b0244cd9dbb1",
"text": "Recently, the desired very high throughput of 5G wireless networks drives millimeter-wave (mm-wave) communication into practical applications. A phased array technique is required to increase the effective antenna aperture at mm-wave frequency. Integrated solutions of beamforming/beam steering are extremely attractive for practical implementations. After a discussion on the basic principles of radio beam steering, we review and explore the recent advanced integration techniques of silicon-based electronic integrated circuits (EICs), photonic integrated circuits (PICs), and antenna-on-chip (AoC). For EIC, the latest advanced designs of on-chip true time delay (TTD) are explored. Even with such advances, the fundamental loss of a silicon-based EIC still exists, which can be solved by advanced PIC solutions with ultra-broad bandwidth and low loss. Advanced PIC designs for mm-wave beam steering are then reviewed with emphasis on an optical TTD. Different from the mature silicon-based EIC, the photonic integration technology for PIC is still under development. In this paper, we review and explore the potential photonic integration platforms and discuss how a monolithic integration based on photonic membranes fits the photonic mm-wave beam steering application, especially for the ease of EIC and PIC integration on a single chip. To combine EIC, for its accurate and mature fabrication techniques, with PIC, for its ultra-broad bandwidth and low loss, a hierarchical mm-wave beam steering chip with large-array delays realized in PIC and sub-array delays realized in EIC can be a future-proof solution. Moreover, the antenna units can be further integrated on such a chip using AoC techniques. Among the mentioned techniques, the integration trends on device and system levels are discussed extensively.",
"title": ""
},
{
"docid": "5b1f9c744daf1798c3af8b717132f87f",
"text": "We have observed a growth in the number of qualitative studies that have no guiding set of philosophic assumptions in the form of one of the established qualitative methodologies. This lack of allegiance to an established qualitative approach presents many challenges for “generic qualitative” studies, one of which is that the literature lacks debate about how to do a generic study well. We encourage such debate and offer four basic requirements as a point of departure: noting the researchers’ position, distinguishing method and methodology, making explicit the approach to rigor, and identifying the researchers’ analytic lens.",
"title": ""
},
{
"docid": "85ab2edb48dd57f259385399437ea8e9",
"text": "Training robust deep video representations has proven to be much more challenging than learning deep image representations. This is in part due to the enormous size of raw video streams and the high temporal redundancy; the true and interesting signal is often drowned in too much irrelevant data. Motivated by that the superfluous information can be reduced by up to two orders of magnitude by video compression (using H.264, HEVC, etc.), we propose to train a deep network directly on the compressed video. This representation has a higher information density, and we found the training to be easier. In addition, the signals in a compressed video provide free, albeit noisy, motion information. We propose novel techniques to use them effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times faster than ResNet-152. On the task of action recognition, our approach outperforms all the other methods on the UCF-101, HMDB-51, and Charades dataset.",
"title": ""
},
{
"docid": "620652a31904be950376332c7f97304d",
"text": "We combine two of the most popular approaches to automated Grammatical Error Correction (GEC): GEC based on Statistical Machine Translation (SMT) and GEC based on Neural Machine Translation (NMT). The hybrid system achieves new state-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC system preserves the accuracy of SMT output and, at the same time, generates more fluent sentences as it typical for NMT. Our analysis shows that the created systems are closer to reaching human-level performance than any other GEC system reported so far.",
"title": ""
},
{
"docid": "ec1528a3f82caa4953c39a101e4d4311",
"text": "Electroencephalography (EEG) is the recording of electrical activity along the scalp of human brain. EEG is most often used to diagnose brain disorders i.e. epilepsy, sleep disorder, coma, brain death etc. EEG signals are frequently contaminated by Eye Blink Artifacts generated due to the opening and closing of eye lids during EEG recording. To analyse signal of EEG for diagnosis it is necessary that the EEG recording should be artifact free. This paper is based on the work to detect the presence of artifact and its actual position with extent in EEG recording. For the purpose of classification of artifact or non-artifact activity Artificial Neural Network (ANN) is used and for detection of contaminated zone the Discrete Wavelet Transform with level 6 Haar is used. The part of zone detection is necessary for further appropriate removal of artifactual activities from EEG recording without losing the background activity. The results demonstrated from the ANN classifier are very much promising such as- Sensitivity 98.21 %, Specificity 87.50 %, and Accuracy 95.83 %.",
"title": ""
},
{
"docid": "d5647902c65b76a86ea800f1ae60c37d",
"text": "Understanding the factors that impact the popularity dynamics of social media can drive the design of effective information services, besides providing valuable insights to content generators and online advertisers. Taking YouTube as case study, we analyze how video popularity evolves since upload, extracting popularity trends that characterize groups of videos. We also analyze the referrers that lead users to videos, correlating them, features of the video and early popularity measures with the popularity trend and total observed popularity the video will experience. Our findings provide fundamental knowledge about popularity dynamics and its implications for services such as advertising and search.",
"title": ""
},
{
"docid": "7d1faee4929d60d952cc8c2c12fa16d3",
"text": "We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning.",
"title": ""
},
{
"docid": "8f97a55eba1c9b3238b4c02d59dc8f52",
"text": "Despite the strong interests among practitioners, there is a knowledge gap with regard to online communities of practice. This study examines knowledge sharing among critical-care and advanced-practice nurses, who are engaged in a longstanding online community of practice. Data were collected about members’ online knowledge contribution as well as motivations for sharing or not sharing knowledge with others. In sum, 27 interviews with members and content analysis of approximately 400 messages were conducted. Data analysis showed that the most common types of knowledge shared were “Institutional Practice” and “Personal Opinion”. Five factors were found that helped motivate knowledge sharing: (a) self-selection type of membership, (b) desire to improve the nursing profession, (c) reciprocity, (d) a non-competitive environment, and (e) the role of the listserv moderator. Regarding barriers for knowledge sharing, four were found: (a) no new or additional knowledge to add, (b) unfamiliarity with subject, (c) lack of time, and (d) technology. These results will be informative to researchers and practitioners of online communities of practice.",
"title": ""
},
{
"docid": "d97a3b15b3a269d697d9936c1c192781",
"text": "In this paper, we take a queer linguistics approach to the analysis of data from British newspaper articles that discuss the introduction of same-sex marriage. Drawing on methods from CDA and corpus linguistics, we focus on the construction of agency in relation to the government extending marriage to same-sex couples, and those resisting this. We show that opponents to same-sex marriage are represented and represent themselves as victims whose moral values, traditions, and civil liberties are being threatened by the state. Specifically, we argue that victimhood is invoked in a way that both enables and permits discourses of implicit homophobia.",
"title": ""
}
] |
scidocsrr
|
ee89ad602cb9fc23256bf80cde48ed9e
|
Crowdsourcing, Attention and Productivity
|
[
{
"docid": "fc40a4af9411d0e9f494b13cbb916eac",
"text": "P (P2P) file sharing networks are an important medium for the distribution of information goods. However, there is little empirical research into the optimal design of these networks under real-world conditions. Early speculation about the behavior of P2P networks has focused on the role that positive network externalities play in improving performance as the network grows. However, negative network externalities also arise in P2P networks because of the consumption of scarce network resources or an increased propensity of users to free ride in larger networks, and the impact of these negative network externalities—while potentially important—has received far less attention. Our research addresses this gap in understanding by measuring the impact of both positive and negative network externalities on the optimal size of P2P networks. Our research uses a unique dataset collected from the six most popular OpenNap P2P networks between December 19, 2000, and April 22, 2001. We find that users contribute additional value to the network at a decreasing rate and impose costs on the network at an increasing rate, while the network increases in size. Our results also suggest that users are less likely to contribute resources to the network as the network size increases. Together, these results suggest that the optimal size of these centralized P2P networks is bounded—At some point the costs that a marginal user imposes on the network will exceed the value they provide to the network. This finding is in contrast to early predictions that larger P2P networks would always provide more value to users than smaller networks. Finally, these results also highlight the importance of considering user incentives—an important determinant of resource sharing in P2P networks—in network design.",
"title": ""
}
] |
[
{
"docid": "ca683d498e690198ca433050c3d91fd0",
"text": "Cross-graph Relational Learning (CGRL) refers to the problem of predicting the strengths or labels of multi-relational tuples of heterogeneous object types, through the joint inference over multiple graphs which specify the internal connections among each type of objects. CGRL is an open challenge in machine learning due to the daunting number of all possible tuples to deal with when the numbers of nodes in multiple graphs are large, and because the labeled training instances are extremely sparse as typical. Existing methods such as tensor factorization or tensor-kernel machines do not work well because of the lack of convex formulation for the optimization of CGRL models, the poor scalability of the algorithms in handling combinatorial numbers of tuples, and/or the non-transductive nature of the learning methods which limits their ability to leverage unlabeled data in training. This paper proposes a novel framework which formulates CGRL as a convex optimization problem, enables transductive learning using both labeled and unlabeled tuples, and offers a scalable algorithm that guarantees the optimal solution and enjoys a linear time complexity with respect to the sizes of input graphs. In our experiments with a subset of DBLP publication records and an Enzyme multi-source dataset, the proposed method successfully scaled to the large cross-graph inference problem, and outperformed other representative approaches significantly.",
"title": ""
},
{
"docid": "8c1d51dd52bc14e8952d9e319eaacf16",
"text": "This paper presents an approach to text recognition in natural scene images. Unlike most existing works which assume that texts are horizontal and frontal parallel to the image plane, our method is able to recognize perspective texts of arbitrary orientations. For individual character recognition, we adopt a bag-of-key points approach, in which Scale Invariant Feature Transform (SIFT) descriptors are extracted densely and quantized using a pre-trained vocabulary. Following [1, 2], the context information is utilized through lexicons. We formulate word recognition as finding the optimal alignment between the set of characters and the list of lexicon words. Furthermore, we introduce a new dataset called StreetViewText-Perspective, which contains texts in street images with a great variety of viewpoints. Experimental results on public datasets and the proposed dataset show that our method significantly outperforms the state-of-the-art on perspective texts of arbitrary orientations.",
"title": ""
},
{
"docid": "2aa492360133f8020abc3d02ec328a4a",
"text": "This paper conducts a performance analysis of two popular private blockchain platforms, Hyperledger Fabric and Ethereum (private deployment), to assess the performance and limitations of these state-of-the-art platforms. Blockchain, a decentralized transaction and data management technology, is said to be the technology that will have similar impacts as the Internet had on people's lives. Many industries have become interested in adopting blockchain in their IT systems, but scalability is an often- cited concern of current blockchain technology. Therefore, the goals of this preliminary performance analysis are twofold. First, a methodology for evaluating a blockchain platform is developed. Second, the analysis results are presented to inform practitioners in making decisions regarding adoption of blockchain technology in their IT systems. The experimental results, based on varying number of transactions, show that Hyperledger Fabric consistently outperforms Ethereum across all evaluation metrics which are execution time, latency and throughput. Additionally, both platforms are still not competitive with current database systems in term of performances in high workload scenarios.",
"title": ""
},
{
"docid": "16814284bc8ab287b8add1bf8930fee7",
"text": "It is cumbersome to write machine learning and graph algorithms in data-parallel models such as MapReduce and Dryad. We observe that these algorithms are based on matrix computations and, hence, are inefficient to implement with the restrictive programming and communication interface of such frameworks.\n In this paper we show that array-based languages such as R [3] are suitable for implementing complex algorithms and can outperform current data parallel solutions. Since R is single-threaded and does not scale to large datasets, we have built Presto, a distributed system that extends R and addresses many of its limitations. Presto efficiently shares sparse structured data, can leverage multi-cores, and dynamically partitions data to mitigate load imbalance. Our results show the promise of this approach: many important machine learning and graph algorithms can be expressed in a single framework and are substantially faster than those in Hadoop and Spark.",
"title": ""
},
{
"docid": "53dabbc33a041872783a109f953afd0f",
"text": "We present an analysis of parser performance on speech data, comparing word type and token frequency distributions with written data, and evaluating parse accuracy by length of input string. We find that parser performance tends to deteriorate with increasing length of string, more so for spoken than for written texts. We train an alternative parsing model with added speech data and demonstrate improvements in accuracy on speech-units, with no deterioration in performance on written text.",
"title": ""
},
{
"docid": "cd70cc8378fcfd5e4fdb06d62e3a7135",
"text": "Omni-directional visual content is a form of representing graphical and cinematic media content which provides subjects with the ability to freely change their direction of view. Along with virtual reality, omnidirectional imaging is becoming a very important type of the modern media content. This brings new challenges to the omnidirectional visual content processing, especially in the field of compression and quality evaluation. More specifically, the ability to assess quality of omnidirectional images in reliable manner is a crucial step to provide a rich quality of immersive experience. In this paper we introduce a testbed suitable for subjective evaluations of omnidirectional visual contents. We also show the results of a conducted pilot experiment to illustrate the applicability of the proposed testbed.",
"title": ""
},
{
"docid": "868c3c6de73d53f54ca6090e9559007f",
"text": "To generate useful summarization of data while maintaining privacy of sensitive information is a challenging task, especially in the big data era. The privacy-preserving principal component algorithm proposed in [1] is a promising approach when a low rank data summarization is desired. However, the analysis in [1] is limited to the case of a single principal component, which makes use of bounds on the vector-valued Bingham distribution in the unit sphere. By exploring the non-commutative structure of data matrices in the full Stiefel manifold, we extend the analysis to an arbitrary number of principal components. Our results are obtained by analyzing the asymptotic behavior of the matrix-variate Bingham distribution using tools from random matrix theory.",
"title": ""
},
{
"docid": "eeb19aa678342a2ff327283537d22f87",
"text": "We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion and the inner human body shape from a single depth camera. One of the key contributions of this method is a double layer representation consisting of a complete parametric body shape inside, and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, and a free-form dynamically changing graph parameterizes the outer surface layer far from the body, which allows more general reconstruction. We further propose a joint motion tracking method based on the double layer representation to enable robust and fast motion tracking performance. Moreover, the inner body shape is optimized online and forced to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios.",
"title": ""
},
{
"docid": "b58c248a9da827ce3286be0a31b934fd",
"text": "Requirement Engineering (RE) plays an important role in the success of software development life cycle. As RE is the starting point of the life cycle, any changes in requirements will be costly and time consuming. Failure in determining accurate requirements leads to errors in specifications and therefore to a mal system architecture. In addition, most of software development environments are characterized by user requests to change some requirements.Scrum as one of agile development methods that gained a great attention because of its ability to deal with the changing environments. This paper presents and discusses the current situation of RE activities in Scrum, how Scrum benefits from RE techniques and future challenges in this respect.",
"title": ""
},
{
"docid": "d2430788229faccdeedd080b97d1741c",
"text": "Potentially, empowerment has much to offer health promotion. However, some caution needs to be exercised before the notion is wholeheartedly embraced as the major goal of health promotion. The lack of a clear theoretical underpinning, distortion of the concept by different users, measurement ambiguities, and structural barriers make 'empowerment' difficult to attain. To further discussion, th is paper proposes several assertions about the definition, components, process and outcome of 'empowerment', including the need for a distinction between psychological and community empowerment. These assertions and a model of community empowerment are offered in an attempt to clarify an important issue for health promotion.",
"title": ""
},
{
"docid": "244ae725a4dffb70d71fdb5c5382d2c3",
"text": ".................................................................................................................................... i Acknowledgements ................................................................................................................. iii List of Abbreviations .............................................................................................................. vi List of Figures ........................................................................................................................ vii List of Tables ......................................................................................................................... viii",
"title": ""
},
{
"docid": "5213aa65c5a291f0839046607dcf5f6c",
"text": "The distribution and mobility of chromium in the soils and sludge surrounding a tannery waste dumping area was investigated to evaluate its vertical and lateral movement of operational speciation which was determined in six steps to fractionate the material in the soil and sludge into (i) water soluble, (ii) exchangeable, (iii) carbonate bound, (iv) reducible, (v) oxidizable, and (vi) residual phases. The present study shows that about 63.7% of total chromium is mobilisable, and 36.3% of total chromium is nonbioavailable in soil, whereas about 30.2% of total chromium is mobilisable, and 69.8% of total chromium is non-bioavailable in sludge. In contaminated sites the concentration of chromium was found to be higher in the reducible phase in soils (31.3%) and oxidisable phases in sludge (56.3%) which act as the scavenger of chromium in polluted soils. These results also indicate that iron and manganese rich soil can hold chromium that will be bioavailable to plants and biota. Thus, results of this study can indicate the status of bioavailable of chromium in this area, using sequential extraction technique. So a suitable and proper management of handling tannery sludge in the said area will be urgently needed to the surrounding environment as well as ecosystems.",
"title": ""
},
{
"docid": "96718ecc3de9cc1b719a49cc2092f6f7",
"text": "n-gram statistical language model has been successfully applied to capture programming patterns to support code completion and suggestion. However, the approaches using n-gram face challenges in capturing the patterns at higher levels of abstraction due to the mismatch between the sequence nature in n-grams and the structure nature of syntax and semantics in source code. This paper presents GraLan, a graph-based statistical language model and its application in code suggestion. GraLan can learn from a source code corpus and compute the appearance probabilities of any graphs given the observed (sub)graphs. We use GraLan to develop an API suggestion engine and an AST-based language model, ASTLan. ASTLan supports the suggestion of the next valid syntactic template and the detection of common syntactic templates. Our empirical evaluation on a large corpus of open-source projects has shown that our engine is more accurate in API code suggestion than the state-of-the-art approaches, and in 75% of the cases, it can correctly suggest the API with only five candidates. ASTLan also has high accuracy in suggesting the next syntactic template and is able to detect many useful and common syntactic templates.",
"title": ""
},
{
"docid": "b4444a17513770702a389d0b9a373ef6",
"text": "The cluster between Internet of Things (IoT) and social networks (SNs) enables the connection of people to the ubiquitous computing universe. In this framework, the information coming from the environment is provided by the IoT, and the SN brings the glue to allow human-to-device interactions. This paper explores the novel paradigm for ubiquitous computing beyond IoT, denoted by Social Internet of Things (SIoT). Although there have been early-stage studies in social-driven IoT, they merely use one or some properties of SIoT to improve a number of specific performance variables. Therefore, this paper first addresses a complete view on SIoT and key perspectives to envision the real ubiquitous computing. Thereafter, a literature review is presented along with the evolutionary history of IoT research from Intranet of Things to SIoT. Finally, this paper proposes a generic SIoT architecture and presents a discussion about enabling technologies, research challenges, and open issues.",
"title": ""
},
{
"docid": "138fc7af52066e890b45afd96debbe91",
"text": "We present a general scheme for analyzing the performance of a generic localization algorithm for multilateration (MLAT) systems (or for other distributed sensor, passive localization technology). MLAT systems are used for airport surface surveillance and are based on time difference of arrival measurements of Mode S signals (replies and 1,090 MHz extended squitter, or 1090ES). In the paper, we propose to consider a localization algorithm as composed of two components: a data model and a numerical method, both being properly defined and described. In this way, the performance of the localization algorithm can be related to the proper combination of statistical and numerical performances. We present and review a set of data models and numerical methods that can describe most localization algorithms. We also select a set of existing localization algorithms that can be considered as the most relevant, and we describe them under the proposed classification. We show that the performance of any localization algorithm has two components, i.e., a statistical one and a numerical one. The statistical performance is related to providing unbiased and minimum variance solutions, while the numerical one is related to ensuring the convergence of the solution. Furthermore, we show that a robust localization (i.e., statistically and numerI. A. Mantilla-Gaviria · J. V. Balbastre-Tejedor Instituto ITACA, Universidad Politécnica de Valencia, Camino de Vera S/N, 46022 Edificio 8G, Acceso B, Valencia, Spain e-mail: [email protected] J. V. Balbastre-Tejedor e-mail: [email protected] M. Leonardi · G. Galati (B) DIE, Tor Vergata University, Via del Politecnico 1, 00133 Rome, Italy e-mail: [email protected]; [email protected] M. Leonardi e-mail: [email protected] ically efficient) strategy, for airport surface surveillance, has to be composed of two specific kind of algorithms. Finally, an accuracy analysis, by using real data, is performed for the analyzed algorithms; some general guidelines are drawn and conclusions are provided.",
"title": ""
},
{
"docid": "6adf6cd920abf2987be8963b2f1641d6",
"text": "This paper presents a diffusion method for generating terrains from a set of parameterized curves that characterize the landform features such as ridge lines, riverbeds or cliffs. Our approach provides the user with an intuitive vector-based feature-oriented control over the terrain. Different types of constraints (such as elevation, slope angle and roughness) can be attached to the curves so as to define the shape of the terrain. The terrain is generated from the curve representation by using an efficient multigrid diffusion algorithm. The algorithm can be efficiently implemented on the GPU, which allows the user to interactively create a vast variety of landscapes.",
"title": ""
},
{
"docid": "5ea9810117c2bf6fd036a9a544af5ffb",
"text": "Graph convolutional networks (GCNs) have been widely used for classifying graph nodes in the semi-supervised setting. Previous work have shown that GCNs are vulnerable to the perturbation on adjacency and feature matrices of existing nodes. However, it is unrealistic to change existing nodes in many applications, such as existing users in social networks. In this paper, we design algorithms to attack GCNs by adding fake nodes. A greedy algorithm is proposed to generate adjacency and feature matrices of fake nodes, aiming to minimize the classification accuracy on the existing nodes. In addition, we introduce a discriminator to classify fake nodes from real nodes, and propose a Greedy-GAN attack to simultaneously update the discriminator and the attacker, to make fake nodes indistinguishable to the real ones. Our non-targeted attack decreases the accuracy of GCN down to 0.10, and our targeted attack reaches a success rate of 99% on the whole datasets, and 94% on average for attacking a single target node.",
"title": ""
},
{
"docid": "84c362cb2d4a737d7ea62d85b9144722",
"text": "This paper considers mixed, or random coeff icients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utilit y maximization has choice probabiliti es that can be approximated as closely as one pleases by a MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairl y eff icient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis. Acknowledgments: Both authors are at the Department of Economics, University of Cali fornia, Berkeley CA 94720-3880. Correspondence should be directed to [email protected]. We are indebted to the E. Morris Cox fund for research support, and to Moshe Ben-Akiva, David Brownstone, Denis Bolduc, Andre de Palma, and Paul Ruud for useful comments. This paper was first presented at the University of Paris X in June 1997.",
"title": ""
},
{
"docid": "02c204377e279bf7edeba4c130ae58d1",
"text": "Because of cloud computing's high degree of polymerization calculation mode, it can't give full play to the resources of the edge device such as computing, storage, etc. Fog computing can improve the resource utilization efficiency of the edge device, and solve the problem about service computing of the delay-sensitive applications. This paper researches on the framework of the fog computing, and adopts Cloud Atomization Technology to turn physical nodes in different levels into virtual machine nodes. On this basis, this paper uses the graph partitioning theory to build the fog computing's load balancing algorithm based on dynamic graph partitioning. The simulation results show that the framework of the fog computing after Cloud Atomization can build the system network flexibly, and dynamic load balancing mechanism can effectively configure system resources as well as reducing the consumption of node migration brought by system changes.",
"title": ""
},
{
"docid": "fe2bc36e704b663c8b9a72e7834e6c7e",
"text": "Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs are capable of performing matrix multiplications on small matrices (usually 4× 4 or 16×16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA’s TCU to express both reduction and scan with matrix multiplication and show the benefits — in terms of program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs which would otherwise be idle, achieves 89%− 98% of peak memory copy bandwidth, and is orders of magnitude faster (up to 100× for reduction and 3× for scan) than state-of-the-art methods for small segment sizes — common in machine learning and scientific applications. Our algorithm achieves this while decreasing the power consumption by up to 22% for reduction and 16% for scan.",
"title": ""
}
] |
scidocsrr
|
584635cb28c385f55f258d123ff5b776
|
Androtrace: framework for tracing and analyzing IOs on Android
|
[
{
"docid": "9361c6eaa2faaa3cfebc4a073ee8f3d3",
"text": "In this paper we present the analysis of two large-scale network file system workloads. We measured CIFS traffic for two enterprise-class file servers deployed in the NetApp data center for a three month period. One file server was used by marketing, sales, and finance departments and the other by the engineering department. Together these systems represent over 22 TB of storage used by over 1500 employees, making this the first ever large-scale study of the CIFS protocol. We analyzed how our network file system workloads compared to those of previous file system trace studies and took an in-depth look at access, usage, and sharing patterns. We found that our workloads were quite different from those previously studied; for example, our analysis found increased read-write file access patterns, decreased read-write ratios, more random file access, and longer file lifetimes. In addition, we found a number of interesting properties regarding file sharing, file re-use, and the access patterns of file types and users, showing that modern file system workload has changed in the past 5–10 years. This change in workload characteristics has implications on the future design of network file systems, which we describe in the paper.",
"title": ""
},
{
"docid": "78952b9185a7fb1d8e7bd7723bb1021b",
"text": "We develop and apply two new methods for analyzing file system behavior and evaluating file system changes. First, semantic block-level analysis (SBA) combines knowledge of on-disk data structures with a trace of disk traffic to infer file syste m behavior; in contrast to standard benchmarking approaches, S BA enables users to understand why the file system behaves as it does. Second, semantic trace playback (STP) enables traces of disk traffic to be easily modified to represent changes in the fi le system implementation; in contrast to directly modifying t he file system, STP enables users to rapidly gauge the benefits of new policies. We use SBA to analyze Linux ext3, ReiserFS, JFS, and Windows NTFS; in the process, we uncover many strengths and weaknesses of these journaling file systems. We also appl y STP to evaluate several modifications to ext3, demonstratin g the benefits of various optimizations without incurring the cos ts of a real implementation.",
"title": ""
}
] |
[
{
"docid": "b414ed7d896bff259dc975bf16777fa7",
"text": "We propose in this work a general procedure to efficient EM-based design of single-layer SIW interconnects, including their transitions to microstrip lines. Our starting point is developed by exploiting available empirical knowledge for SIW. We propose an efficient SIW surrogate model for direct EM design optimization in two stages: first optimizing the SIW width to achieve the specified low cutoff frequency, followed by the transition optimization to reduce reflections and extend the dominant mode bandwidth. Our procedure is illustrated by designing a SIW interconnect on a standard FR4-based substrate.",
"title": ""
},
{
"docid": "683bad69cfb2c8980020dd1f8bd8cea4",
"text": "BRUTUS is a program that tells stories. The stories are intriguing, they hold a hint of mystery, and—not least impressive—they are written in correct English prose. An example (p. 124) is shown in Figure 1. This remarkable feat is grounded in a complex architecture making use of a number of levels, each of which is parameterized so as to become a locus of possible variation. The specific BRUTUS1 implementation that illustrates the program’s prowess exploits the theme of betrayal, which receives an elaborate analysis, culminating in a set",
"title": ""
},
{
"docid": "72cff6209ecea7538179aaf430876381",
"text": "A potential Mars Sample Return (MSR) mission would require robotic autonomous capture and manipulation of an Orbital Sample (OS) before returning the samples to Earth. An orbiter would capture the OS, manipulate to a preferential orientation, transition it through the steps required to break-the-chain with Mars, stowing it in a containment vessel or an Earth Entry Vehicle (EEV) and providing redundant containment to the OS (for example by closing and sealing the lid of the EEV). In this paper, we discuss the trade-space of concepts generated for both the individual aspects of capture and manipulation of the OS, as well as concepts for the end-to-end system. Notably, we discuss concepts for OS capture, manipulation of the OS to orient it to a preferred configuration, and steps for transitioning the OS between different stages of manipulation, ultimately securing it in a containment vessel or Earth Entry Vehicle.",
"title": ""
},
{
"docid": "04ff9fe1984fded27d638fe2552adf79",
"text": "While social networks can provide an ideal platform for upto-date information from individuals across the world, it has also proved to be a place where rumours fester and accidental or deliberate misinformation often emerges. In this article, we aim to support the task of making sense from social media data, and specifically, seek to build an autonomous message-classifier that filters relevant and trustworthy information from Twitter. For our work, we collected about 100 million public tweets, including users’ past tweets, from which we identified 72 rumours (41 true, 31 false). We considered over 80 trustworthiness measures including the authors’ profile and past behaviour, the social network connections (graphs), and the content of tweets themselves. We ran modern machine-learning classifiers over those measures to produce trustworthiness scores at various time windows from the outbreak of the rumour. Such time-windows were key as they allowed useful insight into the progression of the rumours. From our findings, we identified that our model was significantly more accurate than similar studies in the literature. We also identified critical attributes of the data that give rise to the trustworthiness scores assigned. Finally we developed a software demonstration that provides a visual user interface to allow the user to examine the analysis.",
"title": ""
},
{
"docid": "0632f4a3119246ee9cd7b858dc0c3ed4",
"text": "AIM\nIn order to improve the patients' comfort and well-being during and after a stay in the intensive care unit (ICU), the patients' perspective on the intensive care experience in terms of memories is essential. The aim of this study was to describe unpleasant and pleasant memories of the ICU stay in adult mechanically ventilated patients.\n\n\nMETHOD\nMechanically ventilated adults admitted for more than 24hours from two Swedish general ICUs were included and interviewed 5 days after ICU discharge using two open-ended questions. The data were analysed exploring the manifest content.\n\n\nFINDINGS\nOf the 250 patients interviewed, 81% remembered the ICU stay, 71% described unpleasant memories and 59% pleasant. Ten categories emerged from the content analyses (five from unpleasant and five from pleasant memories), contrasting with each other: physical distress and relief of physical distress, emotional distress and emotional well-being, perceptual distress and perceptual well-being, environmental distress and environmental comfort, and stress-inducing care and caring service.\n\n\nCONCLUSION\nMost critical care patients have both unpleasant and pleasant memories of their ICU stay. Pleasant memories such as support and caring service are important to relief the stress and may balance the impact of the distressing memories of the ICU stay.",
"title": ""
},
{
"docid": "e592ccd706b039b12cc4e724a7b217cd",
"text": "In fully distributed machine learning, privacy and security are important issues. These issues are often dealt with using secure multiparty computation (MPC). However, in our application domain, known MPC algorithms are not scalable or not robust enough. We propose a light-weight protocol to quickly and securely compute the sum of the inputs of a subset of participants assuming a semi-honest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed mini-batch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. In other words, speed and robustness takes precedence over precision. We analyze the protocol theoretically as well as experimentally based on churn statistics from a real smartphone trace. We derive a sufficient condition for preventing the leakage of an individual value, and we demonstrate the feasibility of the overhead of the protocol.",
"title": ""
},
{
"docid": "f7de8256c3d556a298e12cb555dd50b8",
"text": "Intrusion Detection Systems (IDSs) detects the network attacks by self-learning, etc. (9). Using Genetic Algorithms for intrusion detection has. Cloud Computing Using Genetic Algorithm. 1. Ku. To overcome this problem we are implementing intrusion detection system in which we use genetic. From Ignite at OSCON 2010, a 5 minute presentation by Bill Lavender: SNORT is popular. Based Intrusion Detection System (IDS), by applying Genetic Algorithm (GA) and Networking Using Genetic Algorithm (IDS) and Decision Tree is to identify. Intrusion Detection System Using Genetic Algorithm >>>CLICK HERE<<< Genetic algorithm (GA) has received significant attention for the design and length chromosomes (VLCs) in a GA-based network intrusion detection system. The proposed approach is tested using Defense Advanced Research Project. Abstract. Intrusion Detection System (IDS) is one of the key security components in today's networking environment. A great deal of attention has been recently. Computer security has become an important part of the day today's life. Not only single computer systems but an extensive network of the computer system. presents an overview of intrusion detection system and a hybrid technique for",
"title": ""
},
{
"docid": "3a2740b7f65841f7eb4f74a1fb3c9b65",
"text": "Getting a better understanding of user behavior is important for advancing information retrieval systems. Existing work focuses on modeling and predicting single interaction events, such as clicks. In this paper, we for the first time focus on modeling and predicting sequences of interaction events. And in particular, sequences of clicks. We formulate the problem of click sequence prediction and propose a click sequence model (CSM) that aims to predict the order in which a user will interact with search engine results. CSM is based on a neural network that follows the encoder-decoder architecture. The encoder computes contextual embeddings of the results. The decoder predicts the sequence of positions of the clicked results. It uses an attentionmechanism to extract necessary information about the results at each timestep. We optimize the parameters of CSM by maximizing the likelihood of observed click sequences. We test the effectiveness ofCSMon three new tasks: (i) predicting click sequences, (ii) predicting the number of clicks, and (iii) predicting whether or not a user will interact with the results in the order these results are presented on a search engine result page (SERP). Also, we show that CSM achieves state-of-the-art results on a standard click prediction task, where the goal is to predict an unordered set of results a user will click on.",
"title": ""
},
{
"docid": "f86078de4b011a737b6bdedd86b4e82f",
"text": "Alarm fatigue can adversely affect nurses’ efficiency and concentration on their tasks, which is a threat to patients’ safety. The purpose of the present study was to develop and test the psychometric accuracy of an alarm fatigue questionnaire for nurses. This study was conducted in two stages: in stage one, in order to establish the different aspects of the concept of alarm fatigue, the researchers reviewed the available literature—articles and books—on alarm fatigue, and then consulted several experts in a meeting to define alarm fatigue and develop statements for the questionnaire. In stage two, after the final draft had been approved, the validity of the instrument was measured using the two methods of face validity (the quantitative and qualitative approaches) and content validity (the qualitative and quantitative approaches). Test–retest, Cronbach’s alpha, and Principal Component Analysis were used for item reduction and reliability analysis. Based on the results of stage one, the researchers extracted 30 statements based on a 5-point Likert scale. In stage two, after the face and content validity of the questionnaire had been established, 19 statements were left in the instrument. Based on factor loadings of the items and “alpha if item deleted” and after the second round of consultation with the expert panel, six items were removed from the scale. The test of the reliability of nurses’ alarm fatigue questionnaire based on the internal homogeneity and retest methods yielded the following results: test–retest correlation coefficient = 0.99; Guttman split-half correlation coefficient = 0.79; Cronbach’s alpha = 0.91. Regarding the importance of recognizing alarm fatigue in nurses, there is need for an instrument to measure the phenomenon. The results of the study show that the developed questionnaire is valid and reliable enough for measuring alarm fatigue in nurses.",
"title": ""
},
{
"docid": "3b91e62d6e43172e68817f679dde5182",
"text": "We model the geodetically observed secular velocity field in northwestern Turkey with a block model that accounts for recoverable elastic-strain accumulation. The block model allows us to estimate internally consistent fault slip rates and locking depths. The northern strand of the North Anatolian fault zone (NAFZ) carries approximately four times as much right-lateral motion ( 24 mm/yr) as does the southern strand. In the Marmara Sea region, the data show strain accumulation to be highly localized. We find that a straight fault geometry with a shallow locking depth of 6–7 km fits the observed Global Positioning System velocities better than does a stepped fault geometry that follows the northern and eastern edges of the sea. This shallow locking depth suggests that the moment release associated with an earthquake on these faults should be smaller, by a factor of 2.3, than previously inferred assuming a locking depth of 15 km. Online material: an updated version of velocity-field data.",
"title": ""
},
{
"docid": "1ae7ea1102f7d32c40a0e5da0d3a8256",
"text": "Unequal access to new technology is often referred to as the \"digital divide.\" But the notion of a digital divide is unclear. This paper explores the concept by attention to prior research on information access. It considers three forms of access, to a device, to an ongoing conduit, and to new social practices, with the latter being the most encompassing and valuable. Earlier research on literacy provides a useful framework for an interpretation of the digital divide based on practices, rather than merely devices or conduits. Both literacy and technology access are multiple, context-dependent, stratified along continua, tied closely for their benefits to particular functions, and dependent on not only education and culture but also power. They also both entail new forms of semiotic interpretation and production. Research in schools illuminates the importance of a precise understanding of the digital divide. Educational reform efforts that place emphasis on a device, such as the One Laptop per Child program, have proven unsuccessful, while those that support new forms of meaning-making and social engagement bring more significant benefits.",
"title": ""
},
{
"docid": "7f2fcc4b4af761292d3f77ffd1a2f7c3",
"text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.",
"title": ""
},
{
"docid": "6e01d0d9b403f8bae201baa68e04fece",
"text": "OBJECTIVE\nTo apply a mathematical model to determine the relative effectiveness of various tip-plasty maneuvers while the lateral crura are in cephalic position compared with orthotopic position.\n\n\nMETHODS\nA Matlab (MathWorks, Natick, Massachusetts) computer program, called the Tip-Plasty Simulator, was developed to model the medial and lateral crura of the tripod concept in order to estimate the change in projection, rotation, and nasal length yielded by changes in crural length. The following rhinoplasty techniques were modeled in the software program: columellar strut graft/tongue-in-groove, lateral crural steal, lateral crural overlay, medial/intermediate crural overlay, hinge release with alar strut graft, and lateral crural repositioning.\n\n\nRESULTS\nUsing the Tip-Plasty Simulator, the directionality of the change in projection, rotation, and nasal length produced by the various tip-plasty maneuvers, as shown by our mathematical model, is largely the same as that expected and observed clinically. Notably, cephalically positioned lateral crura affected the results of the rhinoplasty maneuvers studied.\n\n\nCONCLUSIONS\nBy demonstrating a difference in the magnitude of change resulting from various rhinoplasty maneuvers, the results of this study enhance the ability of the rhinoplasty surgeon to predict the effects of various tip-plasty maneuvers, given the variable range in alar cartilage orientation that he or she is likely to encounter.",
"title": ""
},
{
"docid": "4a75586965854ba2cba2fed18528e72b",
"text": "Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However the recent emergence of the TCDTIMIT corpus, with around 6000 words, 59 speakers and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques in audio-speech such as Deep Neural Networks (DNNs) and sequence discriminative training. In this paper we combine the DNN with a Hidden Markov Model (HMM) to the, so called, hybrid DNN-HMM configuration which we train using a variety of sequence discriminative training methods. This is then followed with a weighted finite state transducer. The conclusion is that the DNN offers very substantial improvement over a conventional classifier which uses a Gaussian Mixture Model (GMM) to model the densities even when optimised with Speaker Adaptive Training. Sequence adaptive training offers further improvements depending on the precise variety employed but those improvements are of the order of 10% improvement in word accuracy. Putting these two results together implies that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.",
"title": ""
},
{
"docid": "9be80d8f93dd5edd72ecd759993935d6",
"text": "The excretory system regulates the chemical composition of body fluids by removing metabolic wastes and retaining the proper amount of water, salts and nutrients. The invertebrate excretory structures are classified in according to their marked variations in the morphological structures into three types included contractile vacuoles in protozoa, nephridia (flame cell system) in most invertebrate animals and Malpighian tubules (arthropod kidney) in insects [2]. There are three distinct excretory organs formed in succession during the development of the vertebrate kidney, they are called pronephros, mesonephros and metanephros. The pronephros is the most primitive one and exists as a functional kidney only in some of the lowest fishes and is called the archinephros. The mesonephros represents the functional excretory organs in anamniotes and called as opisthonephros. The metanephros is the most caudally located of the excretory organs and the last to appear, it represents the functional kidney in amniotes [2-4].",
"title": ""
},
{
"docid": "f37d32a668751198ed8acde8ab3bdc12",
"text": "INTRODUCTION\nAlthough the critical feature of attention-deficit/hyperactivity disorder (ADHD) is a persistent pattern of inattention and/or hyperactivity/impulsivity behavior, the disorder is clinically heterogeneous, and concomitant difficulties are common. Children with ADHD are at increased risk for experiencing lifelong impairments in multiple domains of daily functioning. In the present study we aimed to build a brief ADHD impairment-related tool -ADHD concomitant difficulties scale (ADHD-CDS)- to assess the presence of some of the most important comorbidities that usually appear associated with ADHD such as emotional/motivational management, fine motor coordination, problem-solving/management of time, disruptive behavior, sleep habits, academic achievement and quality of life. The two main objectives of the study were (i) to discriminate those profiles with several and important ADHD functional difficulties and (ii) to create a brief clinical tool that fosters a comprehensive evaluation process and can be easily used by clinicians.\n\n\nMETHODS\nThe total sample included 399 parents of children with ADHD aged 6-18 years (M = 11.65; SD = 3.1; 280 males) and 297 parents of children without a diagnosis of ADHD (M = 10.91; SD = 3.2; 149 male). The scale construction followed an item improved sequential process.\n\n\nRESULTS\nFactor analysis showed a 13-item single factor model with good fit indices. Higher scores on inattention predicted higher scores on ADHD-CDS for both the clinical sample (β = 0.50; p < 0.001) and the whole sample (β = 0.85; p < 0.001). The ROC curve for the ADHD-CDS (against the ADHD diagnostic status) gave an area under the curve (AUC) of.979 (95%, CI = [0.969, 0.990]).\n\n\nDISCUSSION\nThe ADHD-CDS has shown preliminary adequate psychometric properties, with high convergent validity and good sensitivity for different ADHD profiles, which makes it a potentially appropriate and brief instrument that may be easily used by clinicians, researchers, and health professionals in dealing with ADHD.",
"title": ""
},
{
"docid": "6465b2af36350a444fbc6682540ff21d",
"text": "We present an algorithm for finding an <i>s</i>-sparse vector <i>x</i> that minimizes the <i>square-error</i> ∥<i>y</i> -- Φ<i>x</i>∥<sup>2</sup> where Φ satisfies the <i>restricted isometry property</i> (RIP), with <i>isometric constant</i> Δ<sub>2<i>s</i></sub> < 1/3. Our algorithm, called <b>GraDeS</b> (Gradient Descent with Sparsification) iteratively updates <i>x</i> as: [EQUATION]\n where γ > 1 and <i>H<sub>s</sub></i> sets all but <i>s</i> largest magnitude coordinates to zero. <b>GraDeS</b> converges to the correct solution in constant number of iterations. The condition Δ<sub>2<i>s</i></sub> < 1/3 is most general for which a <i>near-linear time</i> algorithm is known. In comparison, the best condition under which a polynomial-time algorithm is known, is Δ<sub>2<i>s</i></sub> < √2 -- 1.\n Our Matlab implementation of <b>GraDeS</b> outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude. Curiously, our experiments also uncovered cases where L1-regularized regression (Lasso) fails but <b>GraDeS</b> finds the correct solution.",
"title": ""
},
{
"docid": "7adbcbcf5d458087d6f261d060e6c12b",
"text": "Operation of MOS devices in the strong, moderate, and weak inversion regions is considered. The advantages of designing the input differential stage of a CMOS op amp to operate in the weak or moderate inversion region are presented. These advantages include higher voltage gain, less distortion, and ease of compensation. Specific design guidelines are presented to optimize amplifier performance. Simulations that demonstrate the expected improvements are given.",
"title": ""
},
{
"docid": "9ee1765f945c8164af6e09a836402e3e",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.05.019 ⇑ Corresponding author at: Instituto Superior de E Portugal. E-mail address: [email protected] (A.J. Ferreira). Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, being able to act as pre-processors for computationally intensive methods to focus their attention on smaller subsets of promising features. The experimental results, with up to 10 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster. 2012 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
b288909b35f845a172e7d4e5a7793e3a
|
A TEACHER STUDENT NETWORK FOR FASTER VIDEO CLASSIFICATION
|
[
{
"docid": "48c03a33c5d34b246dce4932ef0fa16e",
"text": "We present a solution to “Google Cloud and YouTube8M Video Understanding Challenge” that ranked 5th place. The proposed model is an ensemble of three model families, two frame level and one video level. The training was performed on augmented dataset, with cross validation.",
"title": ""
},
{
"docid": "87b67f9ed23c27a71b6597c94ccd6147",
"text": "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.",
"title": ""
}
] |
[
{
"docid": "a2b199daef2ba734700531f41ab42fdb",
"text": "Joint object detection and semantic segmentation can be applied to many fields, such as self-driving cars and unmanned surface vessels. An initial and important progress towards this goal has been achieved by simply sharing the deep convolutional features for the two tasks. However, this simple scheme is unable to make full use of the fact that detection and segmentation are mutually beneficial. To overcome this drawback, we propose a framework called TripleNet where triple supervisions including detection-oriented supervision, class-aware segmentation supervision, and class-agnostic segmentation supervision are imposed on each layer of the decoder network. Classagnostic segmentation supervision provides an objectness prior knowledge for both semantic segmentation and object detection. Besides the three types of supervisions, two light-weight modules (i.e., inner-connected module and attention skip-layer fusion) are also incorporated into each layer of the decoder. In the proposed framework, detection and segmentation can sufficiently boost each other. Moreover, class-agnostic and class-aware segmentation on each decoder layer are not performed at the test stage. Therefore, no extra computational costs are introduced at the test stage. Experimental results on the VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is able to improve both the detection and segmentation accuracies without adding extra computational costs.",
"title": ""
},
{
"docid": "06c4388fb519484577d5c5556f369263",
"text": "This paper proposes new research themes concerning decision support in intermodal transport. Decision support models have been constructed for private stakeholders (e.g. network operators, drayage operators, terminal operators or intermodal operators) as well as for public actors such as policy makers and port authorities. Intermodal research topics include policy support, terminal network design, intermodal service network design, intermodal routing, drayage operations and ICT innovations. For each research topic, the current state of the art and gaps in existing models are identified. Current trends in intermodal decision support models include the introduction of environmental concerns, the development of dynamic models and the growth in innovative applications of Operations Research techniques. Limited data availability and problem size (network scale) and related computational considerations are issues which increase the complexity of decision support in intermodal transport. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a3b4ef83e513e7541cd6c1517bf0f605",
"text": "All cellular proteins undergo continuous synthesis and degradation. This permanent renewal is necessary to maintain a functional proteome and to allow rapid changes in levels of specific proteins with regulatory purposes. Although for a long time lysosomes were considered unable to contribute to the selective degradation of individual proteins, the discovery of chaperone-mediated autophagy (CMA) changed this notion. Here, we review the characteristics that set CMA apart from other types of lysosomal degradation and the subset of molecules that confer cells the capability to identify individual cytosolic proteins and direct them across the lysosomal membrane for degradation.",
"title": ""
},
{
"docid": "42bf428e3c6a4b3c4cb46a2735de872d",
"text": "We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (< $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.",
"title": ""
},
{
"docid": "44fdf1c17ebda2d7b2967c84361a5d9a",
"text": "A high-efficiency power amplifier (PA) is important in a Megahertz wireless power transfer (WPT) system. It is attractive to apply the Class-E PA for its simple structure and high efficiency. However, the conventional design for Class-E PA can only ensure a high efficiency for a fixed load. It is necessary to develop a high-efficiency Class-E PA for a wide-range load in WPT systems. A novel design method for Class-E PA is proposed to achieve this objective in this paper. The PA achieves high efficiency, above 80%, for a load ranging from 10 to 100 Ω at 6.78 MHz in the experiment.",
"title": ""
},
{
"docid": "c2fd86b36364ac9c40e873176443c4c8",
"text": "In a public service announcement on 17 March 2016, the Federal Bureau of Investigation jointly with the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) released a warning regarding the increasing vulnerability of motor vehicles to remote exploits [18]. Engine shutdowns, disabled brakes, and locked doors are a few examples of possible vehicle cybersecurity attacks. Modern cars grow into a new target for cyberattacks as they become increasingly connected. While driving on the road, sharks (i.e., hackers) need only to be within communication range of a vehicle to attack it. However, in some cases, they can hack into it while they are miles away. In this article, we aim to illuminate the latest vehicle cybersecurity threats including malware attacks, on-board diagnostic (OBD) vulnerabilities, and automobile apps threats. We illustrate the in-vehicle network architecture and demonstrate the latest defending mechanisms designed to mitigate such threats.",
"title": ""
},
{
"docid": "bb416322f9ce64045f2bd98cfeacb715",
"text": "This abstract presents our preliminary results on development of a cognitive assistant system for emergency response that aims to improve situational awareness and safety of first responders. This system integrates a suite of smart wearable sensors, devices, and analytics for real-time collection and analysis of in-situ data from incident scene and providing dynamic data-driven insights to responders on the most effective response actions to take.",
"title": ""
},
{
"docid": "04f1893ab7bd601bf1977558f480494d",
"text": "This paper describes a method for generative player modeling and its application to the automatic testing of game content using archetypal player models called procedural personas. Theoretically grounded in psychological decision theory, procedural personas are implemented using a variation of Monte Carlo Tree Search (MCTS) where the node selection criteria are developed using evolutionary computation, replacing the standard UCB1 criterion of MCTS. Using these personas we demonstrate how generative player models can be applied to a varied corpus of game levels and demonstrate how different play styles can be enacted in each level. In short, we use artificially intelligent personas to construct synthetic playtesters. The proposed approach could be used as a tool for automatic play testing when human feedback is not readily available or when quick visualization of potential interactions is necessary. Possible applications include interactive tools during game development or procedural content generation systems where many evaluations must be conducted within a short time span.",
"title": ""
},
{
"docid": "1cecb4765c865c0f44c76f5ed2332c13",
"text": "Speaker indexing or diarization is an important task in audio processing and retrieval. Speaker diarization is the process of labeling a speech signal with labels corresponding to the identity of speakers. This paper includes a comprehensive review on the evolution of the technology and different approaches in speaker indexing and tries to offer a fully detailed discussion on these approaches and their contributions. This paper reviews the most common features for speaker diarization in addition to the most important approaches for speech activity detection (SAD) in diarization frameworks. Two main tasks of speaker indexing are speaker segmentation and speaker clustering. This paper includes a separate review on the approaches proposed for these subtasks. However, speaker diarization systems which combine the two tasks in a unified framework are also introduced in this paper. Another discussion concerns the approaches for online speaker indexing which has fundamental differences with traditional offline approaches. Other parts of this paper include an introduction on the most common performance measures and evaluation datasets. To conclude this paper, a complete framework for speaker indexing is proposed, which is aimed to be domain independent and parameter free and applicable for both online and offline applications. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "516eb5f2160659cb1ef57a5a826efc64",
"text": "To describe physical activity (PA) and sedentary behavior (SB) patterns before and during pregnancy among Chinese, Malay and Indian women. In addition, to investigate determinants of change in PA and SB during pregnancy. The Growing Up in Singapore Towards healthy Outcomes cohort recruited first trimester pregnant women. PA and SB (sitting time and television time) before and during pregnancy were assessed as a part of an interview questionnaire at weeks 26–28 gestational clinic visit. Total energy expenditure (TEE) on PA and time in SB were calculated. Determinants of change in PA and SB were investigated using multiple logistic regression analysis. PA and SB questions were answered by 94 % (n = 1171) of total recruited subjects. A significant reduction in TEE was observed from before to during pregnancy [median 1746.0–1039.5 metabolic equivalent task (MET) min/week, p < 0.001]. The proportion of women insufficiently active (<600 MET-min/week) increased from 19.0 to 34.1 % (p < 0.001). Similarly, sitting time (median 56.0–63.0 h/week, p < 0.001) and television time (mean 16.1–16.7 h/week, p = 0.01) increased. Women with higher household income, lower level of perceived health, nausea/vomiting during pregnancy and higher level of pre-pregnancy PA were more likely to reduce PA. Women with children were less likely to reduce PA. Women reporting nausea/vomiting and lower level of pre-pregnancy sitting time were more likely to increase sitting time. Participants substantially reduced PA and increased SB by 26–28 weeks of pregnancy. Further research is needed to better understand determinants of change in PA and SB and develop effective health promotion strategies.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "3bc9eb46e389b7be4141950142c606dd",
"text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.",
"title": ""
},
{
"docid": "982ee984dda5930b025ac93749c3cf3f",
"text": "We present an application for the simulation of errors in storage systems. The software is completely parameterizable in order to simulate different types of disk errors and disk array configurations. It can be used to verify and optimize error correction schemes for storage. Realistic simulation of disk errors is a complex task as many test rounds need to be performed in order to characterize the performance of an algorithm based on highly sporadic errors under a large variety of parameters. The software allows different levels of abstraction to perform quick tests for rough estimations as well as detailed configurations for more realistic but complex simulation runs. We believe that this simulation software is the first one that is able to cover a complete range of disk error types in many commonly used disk array configurations.",
"title": ""
},
{
"docid": "a60436a4b4152fbfef04b5c09f740636",
"text": "Detection of surgical instruments plays a key role in ensuring patient safety in minimally invasive surgery. In this paper, we present a novel method for 2D vision-based recognition and pose estimation of surgical instruments that generalizes to different surgical applications. At its core, we propose a novel scene model in order to simultaneously recognize multiple instruments as well as their parts. We use a Convolutional Neural Network architecture to embody our model and show that the cross-entropy loss is well suited to optimize its parameters which can be trained in an end-to-end fashion. An additional advantage of our approach is that instrument detection at test time is achieved while avoiding the need for scale-dependent sliding window evaluation. This allows our approach to be relatively parameter free at test time and shows good performance for both instrument detection and tracking. We show that our approach surpasses state-of-the-art results on in-vivo retinal microsurgery image data, as well as ex-vivo laparoscopic sequences.",
"title": ""
},
{
"docid": "737ef89cc5f264dcb13be578129dca64",
"text": "We present a new approach to extracting keyphrases based on statistical language models. Our approach is to use pointwise KL-divergence between multiple language models for scoring both phraseness and informativeness, which can be unified into a single score to rank extracted phrases.",
"title": ""
},
{
"docid": "20b00a2cc472dfec851f4aea42578a9e",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "99a7cab192f636c940cfbe0b57d42ab3",
"text": "In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the singlecandidate model. Nevertheless the preference relationship between the antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noises during training and resolution. The experimental results on MUC-6 and MUC-7 data set show that our approach can outperform those based on the singlecandidate model.",
"title": ""
},
{
"docid": "96c30be2e528098e86b84b422d5a786a",
"text": "The LSTM is a popular neural network model for modeling or analyzing the time-varying data. The main operation of LSTM is a matrix-vector multiplication and it becomes sparse (spMxV) due to the widely-accepted weight pruning in deep learning. This paper presents a new sparse matrix format, named CBSR, to maximize the inference speed of the LSTM accelerator. In the CBSR format, speed-up is achieved by balancing out the computation loads over PEs. Along with the new format, we present a simple network transformation to completely remove the hardware overhead incurred when using the CBSR format. Also, the detailed analysis on the impact of network size or the number of PEs is performed, which lacks in the prior work. The simulation results show 16∼38% improvement in the system performance compared to the well-known CSC/CSR format. The power analysis is also performed in 65nm CMOS technology to show 9∼22% energy savings.",
"title": ""
},
{
"docid": "54d9985cd849605eb1c4c1369fc734cb",
"text": "Arjan Graybill Clinical Profile of the Juvenile Delinquent 1999 Dr. J. Klanderman Seminar in School Psychology This study attempted to explore the relationship that a juvenile delinquent has with three major influences: school, peers, and family. It was hypothesized that juvenile delinquents possess a poor relationship with these influences. Subjects were administered a survey which assesses the relationship with school, peers and family. 19 inmates in a juvenile detention center were administered the survey. There were 15 subjects in the control group who were administered the survey as well. Results from independent tscores reveal a significant difference in the relationship with school, peers, and family for the two groups. Juvenile delinquents were found to have a poor relationship with these major influences.",
"title": ""
},
{
"docid": "fb66a74a7cb4aa27556b428e378353a8",
"text": "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Abstract—High-resolution radar sensors are able to resolve multiple measurements per object and therefore provide valuable information for vehicle environment perception. For instance, multiple measurements allow to infer the size of an object or to more precisely measure the object’s motion. Yet, the increased amount of data raises the demands on tracking modules: measurement models that are able to process multiple measurements for an object are necessary and measurement-toobject associations become more complex. This paper presents a new variational radar model for vehicles and demonstrates how this model can be incorporated in a Random-Finite-Setbased multi-object tracker. The measurement model is learned from actual data using variational Gaussian mixtures and avoids excessive manual engineering. In combination with the multiobject tracker, the entire process chain from the raw measurements to the resulting tracks is formulated probabilistically. The presented approach is evaluated on experimental data and it is demonstrated that data-driven measurement model outperforms a manually designed model.",
"title": ""
}
] |
scidocsrr
|
566a6f2de0beccb2a5ca94a42ef6305a
|
Interval-based Queries over Multiple Streams with Missing Timestamps
|
[
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "86b12f890edf6c6561536a947f338feb",
"text": "Looking for qualified reading resources? We have process mining discovery conformance and enhancement of business processes to check out, not only review, yet also download them or even read online. Discover this great publication writtern by now, simply right here, yeah just right here. Obtain the data in the sorts of txt, zip, kindle, word, ppt, pdf, as well as rar. Once again, never ever miss out on to read online as well as download this publication in our site here. Click the link. Our goal is always to offer you an assortment of cost-free ebooks too as aid resolve your troubles. We have got a considerable collection of totally free of expense Book for people from every single stroll of life. We have got tried our finest to gather a sizable library of preferred cost-free as well as paid files.",
"title": ""
}
] |
[
{
"docid": "266114ecdd54ce1c5d5d0ec42c04ed4d",
"text": "A multiscale image registration technique is presented for the registration of medical images that contain significant levels of noise. An overview of the medical image registration problem is presented, and various registration techniques are discussed. Experiments using mean squares, normalized correlation, and mutual information optimal linear registration are presented that determine the noise levels at which registration using these techniques fails. Further experiments in which classical denoising algorithms are applied prior to registration are presented, and it is shown that registration fails in this case for significantly high levels of noise, as well. The hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [20] is presented, and accurate registration of noisy images is achieved by obtaining a hierarchical multiscale decomposition of the images and registering the resulting components. This approach enables successful registration of images that contain noise levels well beyond the level at which ordinary optimal linear registration fails. Image registration experiments demonstrate the accuracy and efficiency of the multiscale registration technique, and for all noise levels, the multiscale technique is as accurate as or more accurate than ordinary registration techniques.",
"title": ""
},
{
"docid": "9aa0fef27776e833b755ee8549ba820b",
"text": "CNNs have made an undeniable impact on computer vision through the ability to learn high-capacity models with large annotated training sets. One of their remarkable properties is the ability to transfer knowledge from a large source dataset to a (typically smaller) target dataset. This is usually accomplished through fine-tuning a fixed-size network on new target data. Indeed, virtually every contemporary visual recognition system makes use of fine-tuning to transfer knowledge from ImageNet. In this work, we analyze what components and parameters change during fine-tuning, and discover that increasing model capacity allows for more natural model adaptation through fine-tuning. By making an analogy to developmental learning, we demonstrate that growing a CNN with additional units, either by widening existing layers or deepening the overall network, significantly outperforms classic fine-tuning approaches. But in order to properly grow a network, we show that newly-added units must be appropriately normalized to allow for a pace of learning that is consistent with existing units. We empirically validate our approach on several benchmark datasets, producing state-of-the-art results.",
"title": ""
},
{
"docid": "ff04d4c2b6b39f53e7ddb11d157b9662",
"text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration. To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.",
"title": ""
},
{
"docid": "7b2b69429f821c996c3a0cc605253368",
"text": "Real-time video and image processing is used in a wide variety of applications from video surveillance and traffic management to medical imaging applications. These operations typically require very high computation power. Standard definition NTSC video is digitized at 720x480 or full D1 resolution at 30 frames per second, which results in a 31MHz pixel rate. With multiple adaptive convolution stages to detect or eliminate different features within the image, the filtering operation receives input data at a rate of over 1 giga samples per second. Coupled with new high-resolution standards and multi-channel environments, processing requirements can be even higher. Achieving this level of processing power using programmable DSP requires multiple processors. A single FPGA with an embedded soft processor can deliver the requisite level of computing power more cost-effectively, while simplifying board complexity.",
"title": ""
},
{
"docid": "538406cd49ca1add375e287354908740",
"text": "A broader approach to research in huj man development is proposed that focuses on the pro\\ gressive accommodation, throughout the life span, between the growing human organism and the changing environments in which it actually lives and grows. \\ The latter include not only the immediate settings containing the developing person but also the larger social contexts, both formal and informal, in which these settings are embedded. In terms of method, the approach emphasizes the use of rigorousj^d^igned exp_erjments, both naturalistic and contrived, beginning in the early stages of the research process. The changing relation between person and environment is conceived in systems terms. These systems properties are set forth in a series of propositions, each illustrated by concrete research examples. This article delineates certain scientific limitations in prevailing approaches to research on human development and suggests broader perspectives in theory, method, and substance. The point of departure for this undertaking is the view that, especially in recent decades, research in human development has pursued a divided course, with each direction tangential to genuine scientific progress. To corrupt a contemporary metaphor, we risk being caught between a rock and a soft place. The rock is rigor, and the soft place relevance. As I have argued elsewhere (Bronfenbrenner, 1974; Note 1), the emphasis on rigor has led to experiments that are elegantly designed but often limited in scope. This limitation derives from the fact that many of these experiments involve situations that are unfamiliar, artificial, and short-lived and that call for unusual behaviors that are difficult to generalize to other settings. From this perspective, it can be said that much of contemporary developmental psychology is the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time.* Partially in reaction to such shortcomings, other workers have stressed the need for social relevance in research, but often with indifference to or open rejection of rigor. In its more extreme manifestations, this trend has taken the form of excluding the scientists themselves from the research process. For example, one major foundation has recently stated as its new policy that, henceforth, grants for research will be awarded only to persons who are themselves the victims of social injusticeA Other, less radical expressions of this trend in-1 volve reliance on existential approaches in which 1 \"experience\" takes the place of observation and I analysis is foregone in favor of a more personalized I and direct \"understanding\" gained through inti\\ mate involvement in the field situation. More, N. common, and more scientifically defensible, is an /\" emphasis on naturalistic observation, but with the / stipulation that it be unguided by any hypotheses i formulated in advance and uncontaminated by V structured experimental designs imposed prior to /",
"title": ""
},
{
"docid": "609c3a75308eb951079373feb88432ae",
"text": "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie one from Wikipedia and the other from IMDb written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-ofthe-art neural RC models which have achieved near human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.",
"title": ""
},
{
"docid": "5ddbaa58635d706215ae3d61fe13e46c",
"text": "Recent years have seen growing interest in the problem of sup er-resolution restoration of video sequences. Whereas in the traditional single image re storation problem only a single input image is available for processing, the task of reconst ructing super-resolution images from multiple undersampled and degraded images can take adv antage of the additional spatiotemporal data available in the image sequence. In particula r, camera and scene motion lead to frames in the source video sequence containing similar, b ut not identical information. The additional information available in these frames make poss ible reconstruction of visually superior frames at higher resolution than that of the original d ta. In this paper we review the current state of the art and identify promising directions f or future research. The authors are with the Laboratory for Image and Signal Analysis (LIS A), University of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected] .",
"title": ""
},
{
"docid": "af928cd35b6b33ce1cddbf566f63e607",
"text": "Machine Learning has been the quintessential solution for many AI problems, but learning is still heavily dependent on the specific training data. Some learning models can be incorporated with a prior knowledge in the Bayesian set up, but these learning models do not have the ability to access any organised world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with News20, DBPedia datasets and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less amount of labeled training data, when it has access to organised world knowledge in the form of knowledge graph.",
"title": ""
},
{
"docid": "bb5092ba6da834b3c5ebd8483ab5e9f0",
"text": "Wireless Sensor Networks (WSNs) are a promising technology with applications in many areas such as environment monitoring, agriculture, the military field or health-care, to name but a few. Unfortunately, the wireless connectivity of the sensors opens doors to many security threats, and therefore, cryptographic solutions must be included on-board these devices and preferably in their design phase. In this vein, Random Number Generators (RNGs) play a critical role in security solutions such as authentication protocols or key-generation algorithms. In this article is proposed an avant-garde proposal based on the cardiac signal generator we carry with us (our heart), which can be recorded with medical or even low-cost sensors with wireless connectivity. In particular, for the extraction of random bits, a multi-level decomposition has been performed by wavelet analysis. The proposal has been tested with one of the largest and most publicly available datasets of electrocardiogram signals (202 subjects and 24 h of recording time). Regarding the assessment, the proposed True Random Number Generator (TRNG) has been tested with the most demanding batteries of statistical tests (ENT, DIEHARDERand NIST), and this has been completed with a bias, distinctiveness and performance analysis. From the analysis conducted, it can be concluded that the output stream of our proposed TRNG behaves as a random variable and is suitable for securing WSNs.",
"title": ""
},
{
"docid": "fd5c5ff7c97b9d6b6bfabca14631b423",
"text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.",
"title": ""
},
{
"docid": "e38f369fb206e1a8034ce00a0ec25869",
"text": "A large body of research work and efforts have been focused on detecting fake news and building online fact-check systems in order to debunk fake news as soon as possible. Despite the existence of these systems, fake news is still wildly shared by online users. It indicates that these systems may not be fully utilized. After detecting fake news, what is the next step to stop people from sharing it? How can we improve the utilization of these fact-check systems? To fill this gap, in this paper, we (i) collect and analyze online users called guardians, who correct misinformation and fake news in online discussions by referring fact-checking URLs; and (ii) propose a novel fact-checking URL recommendation model to encourage the guardians to engage more in fact-checking activities. We found that the guardians usually took less than one day to reply to claims in online conversations and took another day to spread verified information to hundreds of millions of followers. Our proposed recommendation model outperformed four state-of-the-art models by 11%~33%. Our source code and dataset are available at http://web.cs.wpi.edu/~kmlee/data/gau.html.",
"title": ""
},
{
"docid": "71723d953f1f4ace7c2501fd2c4e5a9f",
"text": "Among all the unique characteristics of a human being, handwriting carries the richest information to gain the insights into the physical, mental and emotional state of the writer. Graphology is the art of studying and analysing handwriting, a scientific method used to determine a person’s personality by evaluating various features from the handwriting. The prime features of handwriting such as the page margins, the slant of the alphabets, the baseline etc. can tell a lot about the individual. To make this method more efficient and reliable, introduction of machines to perform the feature extraction and mapping to various personality traits can be done. This compliments the graphologists, and also increases the speed of analysing handwritten samples. Various approaches can be used for this type of computer aided graphology. In this paper, a novel approach of machine learning technique to implement the automated handwriting analysis tool is discussed.",
"title": ""
},
{
"docid": "3940ccc6f409140582680de1fdc0f610",
"text": "Fermentation of food components by microbes occurs both during certain food production processes and in the gastro-intestinal tract. In these processes specific compounds are produced that originate from either biotransformation reactions or biosynthesis, and that can affect the health of the consumer. In this review, we summarize recent advances highlighting the potential to improve the nutritional status of a fermented food by rational choice of food-fermenting microbes. The vast numbers of microbes residing in the human gut, the gut microbiota, also give rise to a broad array of health-active molecules. Diet and functional foods are important modulators of the gut microbiota activity that can be applied to improve host health. A truly multidisciplinary approach is required to increase our understanding of the molecular mechanisms underlying health beneficial effects that arise from the interaction of diet, microbes and the human body.",
"title": ""
},
{
"docid": "eb4284f45dfe66e4195de12d13f2decc",
"text": "An entry of X is denoted by Xi1,...,id where each index iμ ∈ {1, . . . , nμ} refers to the μth mode of the tensor for μ = 1, . . . , d. For simplicity, we will assume that X has real entries, but it is of course possible to define complex tensors or, more generally, tensors over arbitrary fields. A wide variety of applications lead to problems where the data or the desired solution can be represented by a tensor. In this survey, we will focus on tensors that are induced by the discretization of a multivariate function; we refer to the survey [169] and to the books [175, 241] for the treatment of tensors containing observed data. The simplest way a given multivariate function f(x1, x2, . . . , xd) on a tensor product domain Ω = [0, 1] leads to a tensor is by sampling f on a tensor grid. In this case, each entry of the tensor contains the function value at the corresponding position in the grid. The function f itself may, for example, represent the solution to a high-dimensional partial differential equation (PDE). As the order d increases, the number of entries in X increases exponentially for constant n = n1 = · · · = nd. This so called curse of dimensionality prevents the explicit storage of the entries except for very small values of d. Even for n = 2, storing a tensor of order d = 50 would require 9 petabyte! It is therefore essential to approximate tensors of higher order in a compressed scheme, for example, a low-rank tensor decomposition. Various such decompositions have been developed, see Section 2. An important difference to tensors containing observed data, a tensor X induced by a function is usually not given directly but only as the solution of some algebraic equation, e.g., a linear system or eigenvalue problem. This requires the development of solvers for such equations working within the compressed storage scheme. Such algorithms are discussed in Section 3. The range of applications of low-rank tensor techniques is quickly expanding. For example, they have been used for addressing:",
"title": ""
},
{
"docid": "2c5e280525168d71d1a48fec047b5a23",
"text": "This paper presents the implementation of four channel Electromyography (EMG) signal acquisition system for acquiring the EMG signal of the lower limb muscles during ankle joint movements. Furthermore, some post processing and statistical analysis for the recorded signal were presented. Four channels were implemented using instrumentation amplifier (INA114) for pre-amplification stage then the amplified signal subjected to the band pass filter to eliminate the unwanted signals. Operational amplifier (OPA2604) was involved for the main amplification stage to get the output signal in volts. The EMG signals were detected during movement of the ankle joint of a healthy subject. Then the signal was sampled at the rate of 2 kHz using NI6009 DAQ and Labview used for displaying and storing the acquired signal. For EMG temporal representation, mean absolute value (MAV) analysis algorithm is used to investigate the level of the muscles activity. This data will be used in future as a control input signal to drive the ankle joint exoskeleton robot.",
"title": ""
},
{
"docid": "1ee444fda98b312b0462786f5420f359",
"text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.",
"title": ""
},
{
"docid": "fee574207e3985ea3c697f831069fa8b",
"text": "This paper focuses on the utilization of wireless networkin g in the robotics domain. Many researchers have already equipped their robot s with wireless communication capabilities, stimulated by the observation that multi-robot systems tend to have several advantages over their single-robot counterpa r s. Typically, this integration of wireless communication is tackled in a quite pragmat ic manner, only a few authors presented novel Robotic Ad Hoc Network (RANET) prot oc ls that were designed specifically with robotic use cases in mind. This is in harp contrast with the domain of vehicular ad hoc networks (VANET). This observati on is the starting point of this paper. If the results of previous efforts focusing on VANET protocols could be reused in the RANET domain, this could lead to rapid progre ss in the field of networked robots. To investigate this possibility, this paper rovides a thorough overview of the related work in the domain of robotic and vehicular ad h oc networks. Based on this information, an exhaustive list of requirements is d efined for both types. It is concluded that the most significant difference lies in the fact that VANET protocols are oriented towards low throughput messaging, while R ANET protocols have to support high throughput media streaming as well. Althoug h not always with equal importance, all other defined requirements are valid for bot h protocols. This leads to the conclusion that cross-fertilization between them is an appealing approach for future RANET research. To support such developments, this pap er concludes with the definition of an appropriate working plan.",
"title": ""
},
{
"docid": "38e9aa4644edcffe87dd5ae497e99bbe",
"text": "Hashtags, created by social network users, have gained a huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academia starts to use hashtags to reshape our understandings on how users interact with each other. #like4like is one of the most popular hashtags in Instagram with more than 290 million photos appended with it, when a publisher uses #like4like in one photo, it means that he will like back photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes this photo, and both of them aim to attract likes in Instagram. In this paper, we study whether #like4like indeed serves the purpose it is created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity has dramatically increased by 1,300 times from 2012 to 2016. Then, we study whether #like4like will attract likes for photo publishers; results show that it is not #like4like but actually photo contents attract more likes, and the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos will receive likes from #like4like publishers. However, results show that more than 90% of the publishers do not keep their promises, i.e., they will not like back others who like their #like4like photos; and for those who keep their promises, the photos which they like back are often randomly selected.",
"title": ""
},
{
"docid": "a33147bd85b4ecf4f2292e4406abfc26",
"text": "Accident detection systems help reduce fatalities stemming from car accidents by decreasing the response time of emergency responders. Smartphones and their onboard sensors (such as GPS receivers and accelerometers) are promising platforms for constructing such systems. This paper provides three contributions to the study of using smartphone-based accident detection systems. First, we describe solutions to key issues associated with detecting traffic accidents, such as preventing false positives by utilizing mobile context information and polling onboard sensors to detect large accelerations. Second, we present the architecture of our prototype smartphone-based accident detection system and empirically analyze its ability to resist false positives as well as its capabilities for accident reconstruction. Third, we discuss how smartphone-based accident detection can reduce overall traffic congestion and increase the preparedness of emergency responders.",
"title": ""
},
{
"docid": "33e6abc5ed78316cc03dae8ba5a0bfc8",
"text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow to compute local or global point features depending on the spatial extend of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets. ar X iv :1 81 0. 01 15 1v 2 [ cs .C V ] 8 D ec 2 01 8 2 F. Engelmann et al.",
"title": ""
}
] |
scidocsrr
|
3169f34ce0190464a4629f9d1778019a
|
Robust GaN MMIC Chipset for T / R Module Front-End Integration
|
[
{
"docid": "e870d5f8daac0d13bdcffcaec4ba04c1",
"text": "In this paper the design, fabrication and test of X-band and 2-18 GHz wideband high power SPDT MMIC switches in microstrip GaN technology are presented. Such switches have demonstrated state-of-the-art performances. In particular the X-band switch exhibits 1 dB insertion loss, better than 37 dB isolation and a power handling capability at 9 GHz of better than 39 dBm at 1 dB insertion loss compression point; the wideband switch has an insertion loss lower than 2.2 dB, better than 25 dB isolation and a power handling capability of better than 38 dBm in the entire bandwidth.",
"title": ""
},
{
"docid": "f4d1a3530cb84b2efa9d5a2a63e66d2f",
"text": "Gallium-Nitride technology is known for its high power density and power amplifier designs, but is also very well suited to realize robust receiver components. This paper presents the design and measurement of a robust AlGaN/GaN Low Noise Amplifier and Transmit/Receive Switch MMIC. Two versions of both MMICs have been designed in the Alcatel-Thales III-V lab AlGaN/GaN microstrip technology. One chipset version operates at X-band and the second also shows wideband performance. Input power handling of >46 dBm for the switch and >41 dBm for the LNA have been measured.",
"title": ""
},
{
"docid": "f9294e7f23321e768d7c586e6e28d424",
"text": "In this paper a first iteration X-band T/R module based on a GaN-HEMT MMIC front-end chip-set, comprising a power amplifier, robust low-noise amplifier and power switch will be presented. Even though ultimate T/R module performance cannot be achieved with current GaN-HEMT technological maturity the impact that this technology can have at systems level in terms of performance/cost trade-off will be illustrated by means of a preliminary innovative module architecture which foresees the elimination of more traditional T7R module components such as ferrite circulator and limiter for front-end signal routing and protection.",
"title": ""
}
] |
[
{
"docid": "624ddac45b110bc809db198d60f3cf97",
"text": "Poisson regression models provide a standard framework for the analysis of count data. In practice, however, count data are often overdispersed relative to the Poisson distribution. One frequent manifestation of overdispersion is that the incidence of zero counts is greater than expected for the Poisson distribution and this is of interest because zero counts frequently have special status. For example, in counting disease lesions on plants, a plant may have no lesions either because it is resistant to the disease, or simply because no disease spores have landed on it. This is the distinction between structural zeros, which are inevitable, and sampling zeros, which occur by chance. In recent years there has been considerable interest in models for count data that allow for excess zeros, particularly in the econometric literature. These models complement more conventional models for overdispersion that concentrate on modelling the variance-mean relationship correctly. Application areas are diverse and have included manufacturing defects (Lambert, 1992), patent applications (Crepon & Duguet, 1997), road safety (Miaou, 1994), species abundance (Welsh et al., 1996; Faddy, 1998), medical consultations",
"title": ""
},
{
"docid": "140a9255e8ee104552724827035ee10a",
"text": "Our goal is to design architectures that retain the groundbreaking performance of CNNs for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks",
"title": ""
},
{
"docid": "df1e281417844a0641c3b89659e18102",
"text": "In this paper we present a novel method to increase the spatial resolution of depth images. We combine a deep fully convolutional network with a non-local variational method in a deep primal-dual network. The joint network computes a noise-free, highresolution estimate from a noisy, low-resolution input depth map. Additionally, a highresolution intensity image is used to guide the reconstruction in the network. By unrolling the optimization steps of a first-order primal-dual algorithm and formulating it as a network, we can train our joint method end-to-end. This not only enables us to learn the weights of the fully convolutional network, but also to optimize all parameters of the variational method and its optimization procedure. The training of such a deep network requires a large dataset for supervision. Therefore, we generate high-quality depth maps and corresponding color images with a physically based renderer. In an exhaustive evaluation we show that our method outperforms the state-of-the-art on multiple benchmarks.",
"title": ""
},
{
"docid": "99bc521c438fa804fd43d1755ba0d900",
"text": "The revolutionary potential of massive open online courses (MOOCs) has been met with much skepticism, particularly in terms of the quality of learning offered. Believing that a focus on learning is more important than a focus on course completion rates, this position paper presents a pedagogical assessment of MOOCs using Chickering and Gamson's Seven Principles of Good Practice in Undergraduate Education and Bloom's taxonomy, based on the author's personal experience as a learner in four xMOOCs. Although most xMOOCs have similar characteristics, the author shows that they are not all offered in exactly the same way, and some provide more sound pedagogy that develops higher order thinking, whereas others do not. The author uses this evaluation, as well as reviews of other xMOOCs in the literature, to glean some good pedagogical practices in xMOOCs and areas for improvement.",
"title": ""
},
{
"docid": "3564e82cf5c67e76ec6c7232dd8ed6aa",
"text": "The past few years have witnessed an increase in the development of wearable sensors for health monitoring systems. This increase has been due to several factors such as development in sensor technology as well as directed efforts on political and stakeholder levels to promote projects which address the need for providing new methods for care given increasing challenges with an aging population. An important aspect of study in such system is how the data is treated and processed. This paper provides a recent review of the latest methods and algorithms used to analyze data from wearable sensors used for physiological monitoring of vital signs in healthcare services. In particular, the paper outlines the more common data mining tasks that have been applied such as anomaly detection, prediction and decision making when considering in particular continuous time series measurements. Moreover, the paper further details the suitability of particular data mining and machine learning methods used to process the physiological data and provides an overview of the properties of the data sets used in experimental validation. Finally, based on this literature review, a number of key challenges have been outlined for data mining methods in health monitoring systems.",
"title": ""
},
{
"docid": "2fba3b2ae27e1389557794673137480d",
"text": "The paper provides an OWL ontology for legal cases with an instantiation of the legal case Popov v. Hayashi. The ontology makes explicit the conceptual knowledge of the legal case domain, supports reasoning about the domain, and can be used to annotate the text of cases, which in turn can be used to populate the ontology. A populated ontology is a case base which can be used for information retrieval, information extraction, and case based reasoning. The ontology contains not only elements for indexing the case (e.g. the parties, jurisdiction, and date), but as well elements used to reason to a decision such as argument schemes and the components input to the schemes. We use the Protégé ontology editor and knowledge acquisition system, current guidelines for ontology development, and tools for visual and linguistic presentation of the ontology.",
"title": ""
},
{
"docid": "f4166e4121dbd6f6ab209e6d99aac63f",
"text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.",
"title": ""
},
{
"docid": "a1d58b3a9628dc99edf53c1112dc99b8",
"text": "Multiple criteria decision-making (MCDM) research has developed rapidly and has become a main area of research for dealing with complex decision problems. The purpose of the paper is to explore the performance evaluation model. This paper develops an evaluation model based on the fuzzy analytic hierarchy process and the technique for order performance by similarity to ideal solution, fuzzy TOPSIS, to help the industrial practitioners for the performance evaluation in a fuzzy environment where the vagueness and subjectivity are handled with linguistic values parameterized by triangular fuzzy numbers. The proposed method enables decision analysts to better understand the complete evaluation process and provide a more accurate, effective, and systematic decision support tool. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "69aebacd458b0cce9ea8a0a1dd13b4a5",
"text": "Given a heterogeneous network, with nodes of di↵erent types – e.g., products, users and sellers from an online recommendation site like Amazon – and labels for a few nodes (‘honest’, ‘suspicious’, etc), can we find a closed formula for Belief Propagation (BP), exact or approximate? Can we say whether it will converge? BP, traditionally an inference algorithm for graphical models, exploits so-called “network e↵ects” to perform graph classification tasks when labels for a subset of nodes are provided; and it has been successful in numerous settings like fraudulent entity detection in online retailers and classification in social networks. However, it does not have a closed-form nor does it provide convergence guarantees in general. We propose ZooBP, a method to perform fast BP on undirected heterogeneous graphs with provable convergence guarantees. ZooBP has the following advantages: (1) Generality: It works on heterogeneous graphs with multiple types of nodes and edges; (2) Closed-form solution: ZooBP gives a closed-form solution as well as convergence guarantees; (3) Scalability: ZooBP is linear on the graph size and is up to 600⇥ faster than BP, running on graphs with 3.3 million edges in a few seconds. (4) E↵ectiveness: Applied on real data (a Flipkart e-commerce network with users, products and sellers), ZooBP identifies fraudulent users with a near-perfect precision of 92.3 % over the top 300 results.",
"title": ""
},
{
"docid": "d93abfdc3bc20a23e533f3ad2e30b9c9",
"text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.",
"title": ""
},
{
"docid": "53e9a7a90c0db7d43e884d04fda5b6ea",
"text": "Data visualization is a common and effective technique for data exploration. However, for complex data, it is infeasible for an analyst to manually generate and browse all possible visualizations for insights. This observation motivated the need for automated solutions that can effectively recommend such visualizations. The main idea underlying those solutions is to evaluate the utility of all possible visualizations and then recommend the top-k visualizations. This process incurs high data processing cost, that is further aggravated by the presence of numerical dimensional attributes. To address that challenge, we propose novel view recommendation schemes, which incorporate a hybrid multi-objective utility function that captures the impact of numerical dimension attributes. Our first scheme, Multi-Objective View Recommendation for Data Exploration (MuVE), adopts an incremental evaluation of our multi-objective utility function, which allows pruning of a large number of low-utility views and avoids unnecessary objective evaluations. Our second scheme, upper MuVE (uMuVE), further improves the pruning power by setting the upper bounds on the utility of views and allowing interleaved processing of views, at the expense of increased memory usage. Finally, our third scheme, Memory-aware uMuVE (MuMuVE), provides pruning power close to that of uMuVE, while keeping memory usage within a pre-specified limit.",
"title": ""
},
{
"docid": "c9f4af65710813850c7c5438368fc07c",
"text": "Due to the complex system context of embedded-software applications, defects can cause life-threatening situations, delays can create huge costs, and insufficient productivity can impact entire economies. Providing better estimates, setting objectives, and identifying critical hot spots in embedded-software engineering requires adequate benchmarking data.",
"title": ""
},
{
"docid": "309fef7105de05da3a0e987c1dc1c3cc",
"text": "Flying Ad hoc Network (FANET) is an infrastructure-less multi-hop radio ad hoc network in which Unmanned Aerial Vehicles (UAVs) and Ground Control Station (GCS) collaborates to forward data traffic. Compared to the standard Mobile Ad hoc NETworks (MANETs), the FANET architecture has some specific features (3D mobility, low UAV density, intermittent network connectivity) that bring challenges to the communication protocol design. Such routing protocol must provide safety by finding an accurate and reliable route between UAVs. This safety can be obtained through the use of agile method during software based routing protocol development (for instance the use of Model Driven Development) by mapping each FANET safety requirement into the routing design process. This process must be completed with a sequential safety validation testing with formal verification tools, standardized simulator (by using real simulation environment) and real-world experiments. In this paper, we considered FANET communication safety by presenting design methodologies and evaluations of FANET routing protocols. We use the LARISSA architecture to guarantee the efficiency and accuracy of the whole system. We also use the model driven development methodology to provide model and code consistency through the use of formal verification tools. To complete the FANET safety validation, OMNeT++ simulations (using real UAVs mobility traces) and real FANET outdoor experiments have been carried out. We confront both results to evaluate routing protocol performances and conclude about its safety consideration.",
"title": ""
},
{
"docid": "5a69b2301b95976ee29138092fc3bb1a",
"text": "We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format.",
"title": ""
},
{
"docid": "3a050e121a1caa5b86c99965475b758d",
"text": "A new breed of computer programs called socialbots are now online, and they can be used to breach users’ privacy, spread misinformation to bias the public opinion, and compromise the graph of a targeted social network [1, 2]. A socialbot controls a fictitious user profile and has the ability to execute basic online activities (e.g., posting a message, sending a connection request). What makes a socialbot different from other self-declared bots (e.g., bots posting weather forecasts in Twitter) is that it is designed to infiltrate online communities by passing itself off as a human being. Socialbots can be used to manipulate the graph of a targeted social network in order to establish a centralized or influential social position in it. This position can be then exploited to mount DDoS attacks, promote products, or to spread propaganda in a viral way. For example, Ratkiewicz et al. [2] described the use of Twitter bots to spread misinformation in the run-up to the US political elections. As the socialbots infiltrate a targeted social network, they also harvest valuable users’ data which is useful for online profiling and large-scale spam campaigns. In fact, a new report showed that spammers are turning to online social networking platforms for distributing their messages [7], which explains the dramatic drop in the world-wide email spam during the recent months [6]. This gave rise to black-market businesses that offer multi-featured socialbots for as high as $29 per bot [3]. Many techniques have been proposed that try to identify socialbots based on their likely unordinary behavior (see [4] for an example on Twitter). In this research, we take a proactive step and investigate the feasibility of operating an organized army of socialbots which we call a Socialbot Network (SbN). A SbN is a group of socialbots that collaborate in infiltrating social networks under the orchestration of one or many “master” bots. We study the security and privacy implications of operating such a SbN on a large scale. In particular, we answer questions about the types of collaboration these socialbots can utilize, the required infrastructure, the network-wide observations that can be exploited to improve the potential infiltration, and the economics of operating a SbN on a large scale. Finally, we present a set of challenges that future social network security systems have to overcome in order to mitigate the potential threat of a SbN.",
"title": ""
},
{
"docid": "de6d83fd854d92e83a59191f48921e0b",
"text": "The automatic detection of objects that are abandoned or removed in a video scene is an interesting area of computer vision, with key applications in video surveillance. Forgotten or stolen luggage in train and airport stations and irregularly parked vehicles are examples that concern significant issues, such as the fight against terrorism and crime, and public safety. Both issues involve the basic task of detecting static regions in the scene. We address this problem by introducing a model-based framework to segment static foreground objects against moving foreground objects in single view sequences taken from stationary cameras. An image sequence model, obtained by learning in a self-organizing neural network image sequence variations, seen as trajectories of pixels in time, is adopted within the model-based framework. Experimental results on real video sequences and comparisons with existing approaches show the accuracy of the proposed stopped object detection approach.",
"title": ""
},
{
"docid": "a09248f7c017c532a3a0a580be14ba20",
"text": "In the past ten years, the software aging phenomenon has been systematically researched, and recognized by both academic, and industry communities as an important obstacle to achieving dependable software systems. One of its main effects is the depletion of operating system resources, causing system performance degradation or crash/hang failures in running applications. When conducting experimental studies to evaluate the operational reliability of systems suffering from software aging, long periods of runtime are required to observe system failures. Focusing on this problem, we present a systematic approach to accelerate the software aging manifestation to reduce the experimentation time, and to estimate the lifetime distribution of the investigated system. First, we introduce the concept of ¿aging factor¿ that offers a fine control of the aging effects at the experimental level. The aging factors are estimated via sensitivity analyses based on the statistical design of experiments. Aging factors are then used together with the method of accelerated degradation test to estimate the lifetime distribution of the system under test at various stress levels. This approach requires us to estimate a relationship model between stress levels and aging degradation. Such models are called stress-accelerated aging relationships. Finally, the estimated relationship models enable us to estimate the lifetime distribution under use condition. The proposed approach is used in estimating the lifetime distribution of a web server with software aging symptoms. The main result is the reduction of the experimental time by a factor close to 685 in comparison with experiments executed without the use of our technique.",
"title": ""
},
{
"docid": "e26970690d94c17afbace7d24a8bd88c",
"text": "This paper addresses the millimeter-wave antenna design aspect of the future 5G wireless systems. The paper reviews the objectives and requirements of millimeter-wave antennas for 5G. Recent advances in mm-wave antenna are reported and design guidelines are discussed. In particular, four different designs are identified from the recent literature based on their attractive characteristics that support 5G requirements and applications. The first design employs a dual-band slotted patch antenna operating at 28 GHz and 38 GHz. The antenna has circular polarization and is excited by a single-feed microstrip line. The present design is desirable for high-gain antenna array implementation in the mm-wave band, in order to compensate for the mm-wave propagation loss. The second design that is presented employs a compact planar inverted-F antenna (PIFA) with single layer dielectric load of a superstrate to enhance the gain and achieve a wide impedance bandwidth resulting in high efficiency. The third design that operates in the mm-wave band is a T-Shaped patch antenna. The proposed antenna a wideband range from (26.5 GHz-40 GHz) of the Ka band. The PFT substrate was used as it offers some advantages; low cost, high flexibility, harmless to human body and resistive towards environmental effects. The last mm-wave antenna design presented employs two MEMO arrays each composed of 2×2 antenna elements. The two MIMO array configurations are spatially orthogonal to each other which results in polarization diversity.",
"title": ""
},
{
"docid": "e048a2f0b62da0e358fde7b65e967ca7",
"text": "Historical text presents numerous challenges for contemporary different techniques, e.g. information retrieval, OCR and POS tagging. In particular, the absence of consistent orthographic conventions in historical text presents difficulties for any system which requires reference to a fixed lexicon accessed by orthographic form. For example, language modeling or retrieval engine for historical text which is produced by OCR systems, where the spelling of words often differ in various way, e.g. one word might have different spellings evolved over time. It is very important to aid those techniques with the rules for automatic mapping of historical wordforms. In this paper, we propose a new technique to model the target modern language by means of a recurrent neural network with long-short term memory architecture. Because the network is recurrent, the considered context is not limited to a fixed size especially due to memory cells which are designed to deal with long-term dependencies. In the set of experiments conducted on the Luther bible database and transform wordforms from Early New High German (ENHG) 14th - 16th centuries to the corresponding modern wordforms in New High German (NHG). We compare our proposed supervised model LSTM to various methods for computing word alignments using statistical, heuristic models. Our new proposed LSTM outperforms the other three state-of-the-art methods. The evaluation shows the accuracy of our model on the known wordforms is 93.90% and on the unknown wordforms is 87.95%, while the accuracy of the existing state-of-the-art combined approach of the wordlist-based and rule-based normalization models is 92.93% for known and 76.88% for unknown tokens. Our proposed LSTM model outperforms on normalizing the modern wordform to historical wordform. The performance on seen tokens is 93.4%, while for unknown tokens is 89.17%.",
"title": ""
},
{
"docid": "f7ef3c104fe6c5f082e7dd060a82c03e",
"text": "Research about the artificial muscle made of fishing lines or sewing threads, called the twisted and coiled polymer actuator (abbreviated as TCA in this paper) has collected many interests, recently. Since TCA has a specific power surpassing the human skeletal muscle theoretically, it is expected to be a new generation of the artificial muscle actuator. In order that the TCA is utilized as a useful actuator, this paper introduces the fabrication and the modeling of the temperature-controllable TCA. With an embedded micro thermistor, the TCA is able to measure temperature directly, and feedback control is realized. The safe range of the force and temperature for the continuous use of the TCA was identified through experiments, and the closed-loop temperature control is successfully performed without the breakage of TCA.",
"title": ""
}
] |
scidocsrr
|
3ee9b35fae07c6267bd512e7df10f572
|
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
|
[
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
}
] |
[
{
"docid": "c0d4538f34499d19f14c3adba8527280",
"text": "OBJECTIVE\nTo consider the use of the diagnostic category 'complex posttraumatic stress disorder' (c-PTSD) as detailed in the forthcoming ICD-11 classification system as a less stigmatising, more clinically useful term, instead of the current DSM-5 defined condition of 'borderline personality disorder' (BPD).\n\n\nCONCLUSIONS\nTrauma, in its broadest definition, plays a key role in the development of both c-PTSD and BPD. Given this current lack of differentiation between these conditions, and the high stigma faced by people with BPD, it seems reasonable to consider using the diagnostic term 'complex posttraumatic stress disorder' to decrease stigma and provide a trauma-informed approach for BPD patients.",
"title": ""
},
{
"docid": "f0b32c584029cd407fd350ddd9d00e70",
"text": "Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. Scalable dynamic load balancing on large clusters is a challenging problem which can be addressed with distributed dynamic load balancing systems. Work stealing is a popular approach to distributed dynamic load balancing; however its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.",
"title": ""
},
{
"docid": "3ea6de664a7ac43a1602b03b46790f0a",
"text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. Une expérience menée avec un ordinateur PDP 11 a indiqué que ces filtres peuvent être programmés de manière simple avec un code machine, et qu'il est possible d'effectuer des opérations en ligne avec des taux d'échantillonnage jusqu'à environ 8 kHz. L'application pratique de tels filtres est illustrée par un exemple dans lequel un filtre à élimination de bande est utilisé pour éliminer les interférences due à la fréquence du courant d'alimentation dans un tracé d'e.c.g. Nach einer Untersuchung der Konstruktion einer Gruppe von Rekursivdigitalfiltern mit niedrigem Durchlässigkeitsbereich und mit ganzzahligen Multipliziereinrichtungen und Linearphaseneigenschaften werden die Möglichkeiten beschrieben, die Gruppe so zu erweitern, daß sie Hochfilter, Bandpaßfilter und Bandstopfilter (“Kerbfilter”) einschließt. Erfahrungen mit einem PDP 11-Computer haben gezeigt, daß diese Filter auf einfache Weise unter Verwendung von Maschinenkode programmiert werden können und daß On-Line-Betrieb bei Entnahmegeschwindigkeiten von bis zu 8 kHz möglich ist. Die praktische Anwendung solcher Filter wird durch Verwendung einer Kerbkonstruktion zur Ausscheidung von Netzfrequenzstörungen von einer ECG-Wellenform illustriert.",
"title": ""
},
{
"docid": "0c7e7491fbf8506d7a3d11e526b509d3",
"text": "While keystream reuse in stream ciphers and one-time pads has been a well known problem for several decades, the risk to real systems has been underappreciated. Previous techniques have relied on being able to accurately guess words and phrases that appear in one of the plaintext messages, making it far easier to claim that \"an attacker would never be able to do that.\" In this paper, we show how an adversary can automatically recover messages encrypted under the same keystream if only the type of each message is known (e.g. an HTML page in English). Our method, which is related to HMMs, recovers the most probable plaintext of this type by using a statistical language model and a dynamic programming algorithm. It produces up to 99% accuracy on realistic data and can process ciphertexts at 200ms per byte on a $2,000 PC. To further demonstrate the practical effectiveness of the method, we show that our tool can recover documents encrypted by Microsoft Word 2002 [22].",
"title": ""
},
{
"docid": "3ae81a471cce55f5da01aba9653d1bff",
"text": "In Attribute-based Encryption (ABE) scheme, attributes play a very important role. Attributes have been exploited to generate a public key for encrypting data and have been used as an access policy to control users’ access. The access policy can be categorized as either key-policy or ciphertext-policy. The key-policy is the access structure on the user’s private key, and the ciphertext-policy is the access structure on the ciphertext. And the access structure can also be categorized as either monotonic or non-monotonic one. Using ABE schemes can have the advantages: (1) to reduce the communication overhead of the Internet, and (2) to provide a fine-grained access control. In this paper, we survey a basic attribute-based encryption scheme, two various access policy attributebased encryption schemes, and two various access structures, which are analyzed for cloud environments. Finally, we list the comparisons of these schemes by some criteria for cloud environments.",
"title": ""
},
{
"docid": "74c6600ea1027349081c08c687119ee3",
"text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.",
"title": ""
},
{
"docid": "807564cfc2e90dee21a3efd8dc754ba3",
"text": "The present paper reports two studies designed to test the Dualistic Model of Passion with regard to performance attainment in two fields of expertise. Results from both studies supported the Passion Model. Harmonious passion was shown to be a positive source of activity investment in that it directly predicted deliberate practice (Study 1) and positively predicted mastery goals which in turn positively predicted deliberate practice (Study 2). In turn, deliberate practice had a direct positive impact on performance attainment. Obsessive passion was shown to be a mixed source of activity investment. While it directly predicted deliberate practice (Study 1) and directly predicted mastery goals (which predicted deliberate practice), it also predicted performance-avoidance and performance-approach goals, with the former having a tendency to facilitate performance directly, and the latter to directly negatively impact on performance attainment (Study 2). Finally, harmonious passion was also positively related to subjective well-being (SWB) in both studies, while obsessive passion was either unrelated (Study 1) or negatively related to SWB (Study 2). The conceptual and applied implications of the differential influences of harmonious and obsessive passion in performance are discussed.",
"title": ""
},
{
"docid": "df88873bdef2ad38a7b2157d6c4c2324",
"text": "Software Testing is a challenging activity for many software engineering projects and it is one of the five main technical activity areas of the software engineering lifecycle that still poses substantial challenges. Testing software requires enough resources and budget to complete it successfully. But most of the organizations face the challenges to provide enough resources to test their software in distributed environment, with different loading level. This leads to severe problem when the software deployed into different client environment and varying user load. Cloud computing is a one of the emerging technology which opens new door for software testing. This paper investigates the software testing in cloud platform which includes cloud testing models, recent research work, commercial tools and research issues.",
"title": ""
},
{
"docid": "d261c284cc4c959b525ceae9f7cfb00c",
"text": "Innate lymphoid cells (ILCs) were first described as playing important roles in the development of lymphoid tissues and more recently in the initiation of inflammation at barrier surfaces in response to infection or tissue damage. It has now become apparent that ILCs play more complex roles throughout the duration of immune responses, participating in the transition from innate to adaptive immunity and contributing to chronic inflammation. The proximity of ILCs to epithelial surfaces and their constitutive strategic positioning in other tissues throughout the body ensures that, in spite of their rarity, ILCs are able to regulate immune homeostasis effectively. Dysregulation of ILC function might result in chronic pathologies such as allergies, autoimmunity, and inflammation. A new role for ILCs in the maintenance of metabolic homeostasis has started to emerge, underlining their importance in fundamental physiological processes beyond infection and immunity.",
"title": ""
},
{
"docid": "bb2c1b4b08a25df54fbd46eaca138337",
"text": "The zero-shot paradigm exploits vector-based word representations extracted from text corpora with unsupervised methods to learn general mapping functions from other feature spaces onto word space, where the words associated to the nearest neighbours of the mapped vectors are used as their linguistic labels. We show that the neighbourhoods of the mapped elements are strongly polluted by hubs, vectors that tend to be near a high proportion of items, pushing their correct labels down the neighbour list. After illustrating the problem empirically, we propose a simple method to correct it by taking the proximity distribution of potential neighbours across many mapped vectors into account. We show that this correction leads to consistent improvements in realistic zero-shot experiments in the cross-lingual, image labeling and image retrieval domains.",
"title": ""
},
{
"docid": "df679dcd213842a786c1ad9587c66f77",
"text": "The statistics of professional sports, including players and teams, provide numerous opportunities for research. Cricket is one of the most popular team sports, with billions of fans all over the world. In this thesis, we address two problems related to the One Day International (ODI) format of the game. First, we propose a novel method to predict the winner of ODI cricket matches using a team-composition based approach at the start of the match. Second, we present a method to quantitatively assess the performances of individual players in a match of ODI cricket which incorporates the game situations under which the players performed. The player performances are further used to predict the player of the match award. Players are the fundamental unit of a team. Players of one team work against the players of the opponent team in order to win a match. The strengths and abilities of the players of a team play a key role in deciding the outcome of a match. However, a team changes its composition depending on the match conditions, venue, and opponent team, etc. Therefore, we propose a novel dynamic approach which takes into account the varying strengths of the individual players and reflects the changes in player combinations over time. Our work suggests that the relative team strength between the competing teams forms a distinctive feature for predicting the winner. Modeling the team strength boils down to modeling individual players’ batting and bowling performances, forming the basis of our approach. We use career statistics as well as the recent performances of a player to model him. Using the relative strength of one team versus the other, along with two player-independent features, namely, the toss outcome and the venue of the match, we evaluate multiple supervised machine learning algorithms to predict the winner of the match. We show that, for our approach, the k-Nearest Neighbor (kNN) algorithm yields better results as compared to other classifiers. Players have multiple roles in a game of cricket, predominantly as batsmen and bowlers. Over the generations, statistics such as batting and bowling averages, and strike and economy rates have been used to judge the performance of individual players. These measures, however, do not take into consideration the context of the game in which a player performed across the course of a match. Further, these types of statistics are incapable of comparing the performance of players across different roles. Therefore, we present an approach to quantitatively assess the performances of individual players in a single match of ODI cricket. We have developed a new measure, called the Work Index, which represents the amount of work that is yet to be done by a team to achieve its target. Our approach incorporates game situations and the team strengths to measure the player contributions. This not only helps us in",
"title": ""
},
{
"docid": "e1651c1f329b8caa53e5322be5bf700b",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fdd01ae46b9c57eada917a6e74796141",
"text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.",
"title": ""
},
{
"docid": "1c576cf604526b448f0264f2c39f705a",
"text": "This paper introduces a high-security post-quantum stateless hash-based signature scheme that signs hundreds of messages per second on a modern 4-core 3.5GHz Intel CPU. Signatures are 41 KB, public keys are 1 KB, and private keys are 1 KB. The signature scheme is designed to provide long-term 2 security even against attackers equipped with quantum computers. Unlike most hash-based designs, this signature scheme is stateless, allowing it to be a drop-in replacement for current signature schemes.",
"title": ""
},
{
"docid": "19548ee85a25f7536783e480e6d80b3b",
"text": "A family of two-phase interleaved LLC (iLLC) resonant converter with hybrid rectifier is proposed for wide output voltage range applications. The primary sides of the two LLC converters are in parallel, and the connection of the secondary windings in the two LLC converters can be regulated by the hybrid rectifier according to the output voltage. Variable frequency control is employed to regulate the output voltage and the secondary windings are in series when the output voltage is high. Fixed-frequency phase-shift control is adopted to regulate the configuration of the secondary windings as well as the output voltage when the output voltage is low. The output voltage range is extended by adaptively changing the configuration of the hybrid rectifier, which results in reduced switching frequency range, circulating current, and conduction losses of the LLC resonant tank. Zero voltage switching and zero current switching are achieved for all the active switches and diodes, respectively, within the entire operation range. The operation principles are analyzed and a 3.5 kW prototype with 400 V input voltage and 150–500 V output voltage is built and tested to evaluate the feasibility of the proposed method.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "d300119f7e25b4252d7212ca42b32fb3",
"text": "Various computational procedures or constraint-based methods for data repairing have been proposed over the last decades to identify errors and, when possible, correct them. However, these approaches have several limitations including the scalability and quality of the values to be used in replacement of the errors. In this paper, we propose a new data repairing approach that is based on maximizing the likelihood of replacement data given the data distribution, which can be modeled using statistical machine learning techniques. This is a novel approach combining machine learning and likelihood methods for cleaning dirty databases by value modification. We develop a quality measure of the repairing updates based on the likelihood benefit and the amount of changes applied to the database. We propose SCARE (SCalable Automatic REpairing), a systematic scalable framework that follows our approach. SCARE relies on a robust mechanism for horizontal data partitioning and a combination of machine learning techniques to predict the set of possible updates. Due to data partitioning, several updates can be predicted for a single record based on local views on each data partition. Therefore, we propose a mechanism to combine the local predictions and obtain accurate final predictions. Finally, we experimentally demonstrate the effectiveness, efficiency, and scalability of our approach on real-world datasets in comparison to recent data cleaning approaches.",
"title": ""
},
{
"docid": "7519e3a8326e2ef2ebd28c22e80c4e34",
"text": "This paper presents a synthetic framework identifying the central drivers of start-up commercialization strategy and the implications of these drivers for industrial dynamics. We link strategy to the commercialization environment – the microeconomic and strategic conditions facing a firm that is translating an \" idea \" into a value proposition for customers. The framework addresses why technology entrepreneurs in some environments undermine established firms, while others cooperate with incumbents and reinforce existing market power. Our analysis suggests that competitive interaction between start-up innovators and established firms depends on the presence or absence of a \" market for ideas. \" By focusing on the operating requirements, efficiency, and institutions associated with markets for ideas, this framework holds several implications for the management of high-technology entrepreneurial firms. (Stern). We would like to thank the firms who participate in the MIT Commercialization Strategies survey for their time and effort. The past two decades have witnessed a dramatic increase in investment in technology entrepreneurship – the founding of small, start-up firms developing inventions and technology with significant potential commercial application. Because of their youth and small size, start-up innovators usually have little experience in the markets for which their innovations are most appropriate, and they have at most two or three technologies at the stage of potential market introduction. For these firms, a key management challenge is how to translate promising",
"title": ""
},
{
"docid": "a7b8986dbfde4a7ccc3a4ad6e07319a7",
"text": "This article tests expectations generated by the veto players theory with respect to the over time composition of budgets in a multidimensional policy space. The theory predicts that countries with many veto players (i.e., coalition governments, bicameral political systems, presidents with veto) will have difficulty altering the budget structures. In addition, countries that tend to make significant shifts in government composition will have commensurate modifications of the budget. Data collected from 19 advanced industrialized countries from 1973 to 1995 confirm these expectations, even when one introduces socioeconomic controls for budget adjustments like unemployment variations, size of retired population and types of government (minimum winning coalitions, minority or oversized governments). The methodological innovation of the article is the use of empirical indicators to operationalize the multidimensional policy spaces underlying the structure of budgets. The results are consistent with other analyses of macroeconomic outcomes like inflation, budget deficits and taxation that are changed at a slower pace by multiparty governments. The purpose of this article is to test empirically the expectations of the veto players theory in a multidimensional setting. The theory defines ‘veto players’ as individuals or institutions whose agreement is required for a change of the status quo. The basic prediction of the theory is that when the number of veto players and their ideological distances increase, policy stability also increases (only small departures from the status quo are possible) (Tsebelis 1995, 1999, 2000, 2002). The theory was designed for the study of unidimensional and multidimensional policy spaces. While no policy domain is strictly unidimensional, existing empirical tests have only focused on analyzing political economy issues in a single dimension. These studies have confirmed the veto players theory’s expectations (see Bawn (1999) on budgets; Hallerberg & Basinger (1998) on taxes; Tsebelis (1999) on labor legislation; Treisman (2000) on inflation; Franzese (1999) on budget deficits). This article is the first attempt to test whether the predictions of the veto players theory hold in multidimensional policy spaces. We will study a phenomenon that cannot be considered unidimensional: the ‘structure’ of budgets – that is, their percentage composition, and the change in this composition over © European Consortium for Political Research 2004 Published by Blackwell Publishing Ltd., 9600 Garsington Road, Oxford, OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] |
scidocsrr
|
bcc1689bceba390d7ad85220196a559f
|
Using CoreSight PTM to Integrate CRA Monitoring IPs in an ARM-Based SoC
|
[
{
"docid": "35258abbafac62dbfbd0be08617e95bf",
"text": "Code Reuse Attacks (CRAs) recently emerged as a new class of security exploits. CRAs construct malicious programs out of small fragments (gadgets) of existing code, thus eliminating the need for code injection. Existing defenses against CRAs often incur large performance overheads or require extensive binary rewriting and other changes to the system software. In this paper, we examine a signature-based detection of CRAs, where the attack is detected by observing the behavior of programs and detecting the gadget execution patterns. We first demonstrate that naive signature-based defenses can be defeated by introducing special “delay gadgets” as part of the attack. We then show how a software-configurable signature-based approach can be designed to defend against such stealth CRAs, including the attacks that manage to use longer-length gadgets. The proposed defense (called SCRAP) can be implemented entirely in hardware using simple logic at the commit stage of the pipeline. SCRAP is realized with minimal performance cost, no changes to the software layers and no implications on binary compatibility. Finally, we show that SCRAP generates no false alarms on a wide range of applications.",
"title": ""
},
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
},
{
"docid": "eb12e9e10d379fcbc156e94c3b447ce1",
"text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.",
"title": ""
}
] |
[
{
"docid": "0c67628fb24c8cbd4a8e49fb30ba625e",
"text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.",
"title": ""
},
{
"docid": "95e0dfb614103b8beece915dd744e384",
"text": "Failure diagnosis is the process of identifying the causes of impairment in a system’s function based on observable symptoms, i.e., determining which fault led to an observed failure. Since multiple faults can often lead to very similar symptoms, failure diagnosis is often the first line of defense when things go wrong a prerequisite before any corrective actions can be undertaken. The results of diagnosis also provide data about a system’s operational fault profile for use in offline resilience evaluation. While diagnosis has historically been a largely manual process requiring significant human input, techniques to automate as much of the process as possible have significantly grown in importance in many industries including telecommunications, internet services, automotive systems, and aerospace. This chapter presents a survey of automated failure diagnosis techniques including both model-based and model-free approaches. Industrial applications of these techniques in the above domains are presented, and finally, future trends and open challenges in the field are discussed.",
"title": ""
},
{
"docid": "bc0fa704763199526c4f28e40fa11820",
"text": "GPFS is a distributed file system run on some of the largest supercomputers and clusters. Through it's deployment, the authors have been able to gain a number of key insights into the methodology of developing a distributed file system which can reliably scale and maintain POSIX semantics. Achieving the necessary throughput requires parallel access for reading, writing and updating metadata. It is a process that is accomplished mostly through distributed locking.",
"title": ""
},
{
"docid": "001b5a976b6b6ccb15ab80ead4617422",
"text": "Multivariate time-series modeling and forecasting is an important problem with numerous applications. Traditional approaches such as VAR (vector auto-regressive) models and more recent approaches such as RNNs (recurrent neural networks) are indispensable tools in modeling time-series data. In many multivariate time series modeling problems, there is usually a significant linear dependency component, for which VARs are suitable, and a nonlinear component, for which RNNs are suitable. Modeling such times series with only VAR or only RNNs can lead to poor predictive performance or complex models with large training times. In this work, we propose a hybrid model called R2N2 (Residual RNN), which first models the time series with a simple linear model (like VAR) and then models its residual errors using RNNs. R2N2s can be trained using existing algorithms for VARs and RNNs. Through an extensive empirical evaluation on two real world datasets (aviation and climate domains), we show that R2N2 is competitive, usually better than VAR or RNN, used alone. We also show that R2N2 is faster to train as compared to an RNN, while requiring less number of hidden units.",
"title": ""
},
{
"docid": "31f65e3f22aa1d6c05a17efc7e8a9b41",
"text": "Methods The Fenofi brate Intervention and Event Lowering in Diabetes (FIELD) study was a multinational randomised trial of 9795 patients aged 50–75 years with type 2 diabetes mellitus. Eligible patients were randomly assigned to receive fenofi brate 200 mg/day (n=4895) or matching placebo (n=4900). At each clinic visit, information concerning laser treatment for diabetic retinopathy—a prespecifi ed tertiary endpoint of the main study—was gathered. Adjudication by ophthalmologists masked to treatment allocation defi ned instances of laser treatment for macular oedema, proliferative retinopathy, or other eye conditions. In a substudy of 1012 patients, standardised retinal photography was done and photographs graded with Early Treatment Diabetic Retinopathy Study (ETDRS) criteria to determine the cumulative incidence of diabetic retinopathy and its component lesions. Analyses were by intention to treat. This study is registered as an International Standard Randomised Controlled Trial, number ISRCTN64783481.",
"title": ""
},
{
"docid": "ae8e043f980d313499433d49aa90467c",
"text": "During the last few years, Convolutional Neural Networks are slowly but surely becoming the default method solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods by using density occupancy grids representations for the input data, and integrating them into a supervised Convolutional Neural Network architecture. An extensive experimentation was carried out, using ModelNet - a large-scale 3D CAD models dataset - to train and test the system, to prove that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.",
"title": ""
},
{
"docid": "4b8a46065520d2b7489bf0475321c726",
"text": "With computing increasingly becoming more dispersed, relying on mobile devices, distributed computing, cloud computing, etc. there is an increasing threat from adversaries obtaining physical access to some of the computer systems through theft or security breaches. With such an untrusted computing node, a key challenge is how to provide secure computing environment where we provide privacy and integrity for data and code of the application. We propose SecureME, a hardware-software mechanism that provides such a secure computing environment. SecureME protects an application from hardware attacks by using a secure processor substrate, and also from the Operating System (OS) through memory cloaking, permission paging, and system call protection. Memory cloaking hides data from the OS but allows the OS to perform regular virtual memory management functions, such as page initialization, copying, and swapping. Permission paging extends the OS paging mechanism to provide a secure way for two applications to establish shared pages for inter-process communication. Finally, system call protection applies spatio-temporal protection for arguments that are passed between the application and the OS. Based on our performance evaluation using microbenchmarks, single-program workloads, and multiprogrammed workloads, we found that SecureME only adds a small execution time overhead compared to a fully unprotected system. Roughly half of the overheads are contributed by the secure processor substrate. SecureME also incurs a negligible additional storage overhead over the secure processor substrate.",
"title": ""
},
{
"docid": "5ce4e0532bf1f6f122f62b37ba61384e",
"text": "Media violence poses a threat to public health inasmuch as it leads to an increase in real-world violence and aggression. Research shows that fictional television and film violence contribute to both a short-term and a long-term increase in aggression and violence in young viewers. Television news violence also contributes to increased violence, principally in the form of imitative suicides and acts of aggression. Video games are clearly capable of producing an increase in aggression and violence in the short term, although no long-term longitudinal studies capable of demonstrating long-term effects have been conducted. The relationship between media violence and real-world violence and aggression is moderated by the nature of the media content and characteristics of and social influences on the individual exposed to that content. Still, the average overall size of the effect is large enough to place it in the category of known threats to public health.",
"title": ""
},
{
"docid": "8f65f1971405e0c225e3625bb682a2d4",
"text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. (in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.",
"title": ""
},
{
"docid": "de1f35d0e19cafc28a632984f0411f94",
"text": "Large-pose face alignment is a very challenging problem in computer vision, which is used as a prerequisite for many important vision tasks, e.g, face recognition and 3D face reconstruction. Recently, there have been a few attempts to solve this problem, but still more research is needed to achieve highly accurate results. In this paper, we propose a face alignment method for large-pose face images, by combining the powerful cascaded CNN regressor method and 3DMM. We formulate the face alignment as a 3DMM fitting problem, where the camera projection matrix and 3D shape parameters are estimated by a cascade of CNN-based regressors. The dense 3D shape allows us to design pose-invariant appearance features for effective CNN learning. Extensive experiments are conducted on the challenging databases (AFLW and AFW), with comparison to the state of the art.",
"title": ""
},
{
"docid": "34cd47ff49e316f26e5596bc9717fd6d",
"text": "In this paper, a BGA package having a ARM SoC chip is introduced, which has component-type embedded decoupling capacitors (decaps) for good power integrity performance of core power. To evaluate and confirm the impact of embedded decap on core PDN (power distribution network), two different packages were manufactured with and without the embedded decaps. The self impedances of system-level core PDN were simulated in frequency-domain and On-chip DvD (Dynamic Voltage Drop) simulations were performed in time-domain in order to verify the system-level impact of package embedded decap. There was clear improvement of system-level core PDN performance in middle frequency range when package embedded decaps were employed. In conclusion, the overall system-level core PDN for ARM SoC could meet the target impedance in frequency-domain as well as the target On-chip DvD level by having package embedded decaps.",
"title": ""
},
{
"docid": "2c442933c4729e56e5f4f46b5b8071d6",
"text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.",
"title": ""
},
{
"docid": "5a549dbf3037a45a49c9f8f2e91b7aeb",
"text": "How can we reuse existing knowledge, in the form of available datasets, when solving a new and apparently unrelated target task from a set of unlabeled data? In this work we make a first contribution to answer this question in the context of image classification. We frame this quest as an active learning problem and use zero-shot classifiers to guide the learning process by linking the new task to the the existing classifiers. By revisiting the dual formulation of adaptive SVM, we reveal two basic conditions to choose greedily only the most relevant samples to be annotated. On this basis we propose an effective active learning algorithm which learns the best possible target classification model with minimum human labeling effort. Extensive experiments on two challenging datasets show the value of our approach compared to the state-of-the-art active learning methodologies, as well as its potential to reuse past datasets with minimal effort for future tasks.",
"title": ""
},
{
"docid": "9db9902c0e9d5fc24714554625a04c7a",
"text": "Large-scale peer-to-peer systems face security threats from faulty or hostile remote computing elements. To resist these threats, many such systems employ redundancy. However, if a single faulty entity can present multiple identities, it can control a substantial fraction of the system, thereby undermining this redundancy. One approach to preventing these “Sybil attacks” is to have a trusted agency certify identities. This paper shows that, without a logically centralized authority, Sybil attacks are always possible except under extreme and unrealistic assumptions of resource parity and coordination among entities.",
"title": ""
},
{
"docid": "efd7512694ed378cb111c94e53890c89",
"text": "Recent years have seen a significant growth and increased usage of large-scale knowledge resources in both academic research and industry. We can distinguish two main types of knowledge resources: those that store factual information about entities in the form of semantic relations (e.g., Freebase), namely so-called knowledge graphs, and those that represent general linguistic knowledge (e.g., WordNet or UWN). In this article, we present a third type of knowledge resource which completes the picture by connecting the two first types. Instances of this resource are graphs of semantically-associated relations (sar-graphs), whose purpose is to link semantic relations from factual knowledge graphs with their linguistic representations in human language. We present a general method for constructing sar-graphs using a languageand relation-independent, distantly supervised approach which, apart from generic language processing tools, relies solely on the availability of a lexical semantic resource, providing sense information for words, as well as a knowledge base containing seed relation instances. Using these seeds, our method extracts, validates and merges relationspecific linguistic patterns from text to create sar-graphs. To cope with the noisily labeled data arising in a distantly supervised setting, we propose several automatic pattern confidence estimation strategies, and also show how manual supervision can be used to improve the quality of sar-graph instances. We demonstrate the applicability of our method by constructing sar-graphs for 25 semantic relations, of which we make a subset publicly available at http://sargraph.dfki.de. We believe sar-graphs will prove to be useful linguistic resources for a wide variety of natural language processing tasks, and in particular for information extraction and knowledge base population. We illustrate their usefulness with experiments in relation extraction and in computer assisted language learning.",
"title": ""
},
{
"docid": "35ed2c8db6b143629e806b68741e9977",
"text": "Nowadays, smart wristbands have become one of the most prevailing wearable devices as they are small and portable. However, due to the limited size of the touch screens, smart wristbands typically have poor interactive experience. There are a few works appropriating the human body as a surface to extend the input. Yet by using multiple sensors at high sampling rates, they are not portable and are energy-consuming in practice. To break this stalemate, we proposed a portable, cost efficient text-entry system, termed ViType, which firstly leverages a single small form factor sensor to achieve a practical user input with much lower sampling rates. To enhance the input accuracy with less vibration information introduced by lower sampling rate, ViType designs a set of novel mechanisms, including an artificial neural network to process the vibration signals, and a runtime calibration and adaptation scheme to recover the error due to temporal instability. Extensive experiments have been conducted on 30 human subjects. The results demonstrate that ViType is robust to fight against various confounding factors. The average recognition accuracy is 94.8% with an initial training sample size of 20 for each key, which is 1.52 times higher than the state-of-the-art on-body typing system. Furthermore, when turning on the runtime calibration and adaptation system to update and enlarge the training sample size, the accuracy can reach around 98% on average during one month.",
"title": ""
},
{
"docid": "dfba47fd3b84d6346052b559568a0c21",
"text": "Understanding gaming motivations is important given the growing trend of incorporating game-based mechanisms in non-gaming applications. In this paper, we describe the development and validation of an online gaming motivations scale based on a 3-factor model. Data from 2,071 US participants and 645 Hong Kong and Taiwan participants is used to provide a cross-cultural validation of the developed scale. Analysis of actual in-game behavioral metrics is also provided to demonstrate predictive validity of the scale.",
"title": ""
},
{
"docid": "ac4a1b85c72984fb0f25e3603651b8db",
"text": "Deep Reinforcement Learning (deep RL) has made several breakthroughs in recent years in applications ranging from complex control tasks in unmanned vehicles to game playing. Despite their success, deep RL still lacks several important capacities of human intelligence, such as transfer learning, abstraction and interpretability. Deep Symbolic Reinforcement Learning (DSRL) seeks to incorporate such capacities to deep Q-networks (DQN) by learning a relevant symbolic representation prior to using Q-learning. In this paper, we propose a novel extension of DSRL, which we call Symbolic Reinforcement Learning with Common Sense (SRL+CS), offering a better balance between generalization and specialization, inspired by principles of common sense when assigning rewards and aggregating Q-values. Experiments reported in this paper show that SRL+CS learns consistently faster than Q-learning and DSRL, achieving also a higher accuracy. In the hardest case, where agents were trained in a deterministic environment and tested in a random environment, SRL+CS achieves nearly 100% average accuracy compared to DSRL’s 70% and DQN’s 50% accuracy. To the best of our knowledge, this is the first case of near perfect zero-shot transfer learning using Reinforcement Learning.",
"title": ""
},
{
"docid": "56321ec6dfc3d4c55fc99125e942cf44",
"text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.",
"title": ""
},
{
"docid": "8b3431783f1dc699be1153ad80348d3e",
"text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”",
"title": ""
}
] |
scidocsrr
|
0418071bc530f6d1fda3ebf3d2e753cf
|
Enabling 5G backhaul and access with millimeter-waves
|
[
{
"docid": "e541be7c81576fdef564fd7eba5d67dd",
"text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.",
"title": ""
},
{
"docid": "ed676ff14af6baf9bde3bdb314628222",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] |
[
{
"docid": "5cd9031a58457c0cb5fb2d49f1da40f6",
"text": "Induction heating (IH) technology is nowadays the heating technology of choice in many industrial, domestic, and medical applications due to its advantages regarding efficiency, fast heating, safety, cleanness, and accurate control. Advances in key technologies, i.e., power electronics, control techniques, and magnetic component design, have allowed the development of highly reliable and cost-effective systems, making this technology readily available and ubiquitous. This paper reviews IH technology summarizing the main milestones in its development and analyzing the current state of art of IH systems in industrial, domestic, and medical applications, paying special attention to the key enabling technologies involved. Finally, an overview of future research trends and challenges is given, highlighting the promising future of IH technology.",
"title": ""
},
{
"docid": "f96bf84a4dfddc8300bb91227f78b3af",
"text": "Membership diversity is a characteristic aspect of social networks in which a person may belong to more than one social group. For this reason, discovering overlapping structures is necessary for realistic social analysis. In this paper, we present a fast algorithm, called SLPA, for overlapping community detection in large-scale networks. SLPA spreads labels according to dynamic interaction rules. It can be applied to both unipartite and bipartite networks. It is also able to uncover overlapping nested hierarchy. The time complexity of SLPA scales linearly with the number of edges in the network. Experiments in both synthetic and realworld networks show that SLPA has an excellent performance in identifying both node and community level overlapping structures.",
"title": ""
},
{
"docid": "fe55de8d3317d590a8cf9e4bb0e6c3c1",
"text": "This article examines the key global, environmental and policy factors that act as determinants of e-commerce diffusion. It is based on systematic comparison of case studies from 10 countries— Brazil, China, Denmark, France, Germany, Mexico, Japan, Singapore, Taiwan, and the United States. It finds that B2B e-commerce seems to be driven by global forces, whereas B2C seems to be more of a local phenomenon. A preliminary explanation for this difference is that B2B is driven by global competition and MNCs that “push” e-commerce to their global suppliers, customers, and subsidiaries. This in turn creates pressures on local companies to adopt e-commerce to stay competitive. In contrast, B2C is “pulled” by consumer markets, which are mainly local and therefore divergent. While all consumers desire convenience and low prices, consumer preferences and values, national culture, and distribution systems differ markedly across countries and define differences in local consumer markets. These findings support the transformation perspective about globalization and its impacts. In terms of policy, the case studies suggest that enabling policies such as trade and telecommunications liberalization are likely to have the biggest impact on e-commerce, by making ICT and",
"title": ""
},
{
"docid": "fe30f2867a2b0419706d0e9fccbff65f",
"text": "Since handwriting recognition is very sensitive to structural noise, like superimposed objects such as straight lines and other marks, it is necessary to remove noise in a preprocessing stage before recognition. Although numerous denoising approaches have been proposed, it remains a challenge. The difficulties are due to non-locality of structural noise and hard discernment between spurious and the meaningful regions. We propose a supervised approach using deep learning to remove structural noise. Specifically, we generalize the deep autoencoder into the deep denoising autoencoder (DDAE), which consists in training a neural network with noisy and clean pairs to minimize cross-entropy error. Inspired by recurrent neural networks, we introduce feedback loop from the output to enhance the \"repaired\" image well in the reconstruction stage in our framework. We test the DDAE model on three handwritten image data sets, and show advantages over Wiener filter, robust Boltzmann machines and deep autoencoder.",
"title": ""
},
{
"docid": "932813bc4a6ccbb81c9a9698b96f3694",
"text": "The fast growing deep learning technologies have become the main solution of many machine learning problems for medical image analysis. Deep convolution neural networks (CNNs), as one of the most important branch of the deep learning family, have been widely investigated for various computer-aided diagnosis tasks including long-term problems and continuously emerging new problems. Image contour detection is a fundamental but challenging task that has been studied for more than four decades. Recently, we have witnessed the significantly improved performance of contour detection thanks to the development of CNNs. Beyond purusing performance in existing natural image benchmarks, contour detection plays a particularly important role in medical image analysis. Segmenting various objects from radiology images or pathology images requires accurate detection of contours. However, some problems, such as discontinuity and shape constraints, are insufficiently studied in CNNs. It is necessary to clarify the challenges to encourage further exploration. The performance of CNN based contour detection relies on the state-of-the-art CNN architectures. Careful investigation of their design principles and motivations is critical and beneficial to contour detection. In this paper, we first review recent development of medical image contour detection and point out the current confronting challenges and problems. We discuss the development of general CNNs and their applications in image contours (or edges) detection. We compare those methods in detail, clarify their strengthens and weaknesses. Then we review their recent applications in medical image analysis and point out limitations, with the goal to light some potential directions in medical image analysis. We expect the paper to cover comprehensive technical ingredients of advanced CNNs to enrich the study in the medical image domain. 1E-mail: [email protected] Preprint submitted to arXiv August 26, 2018 ar X iv :1 70 8. 07 28 1v 1 [ cs .C V ] 2 4 A ug 2 01 7",
"title": ""
},
{
"docid": "2a3f5f621195c036064e3d8c0b9fc884",
"text": "This paper describes our system for the CoNLL 2016 Shared Task’s supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.",
"title": ""
},
{
"docid": "86cb3c072e67bed8803892b72297812c",
"text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of",
"title": ""
},
{
"docid": "1492c9c12d2ae969e1b45831f642943f",
"text": "In this paper, a novel polarization-reconfigurable converter (PRC) is proposed based on a multilayer frequency-selective surface (MFSS). First, the MFSS is designed using the square patches and the grid lines array to determine the operational frequency and bandwidth, and then the corners of the square patches are truncated to produce the phase difference of 90° between the two orthogonal linear components for circular polarization performance. To analyze and synthesize the PRC array, the operational mechanism is described in detail. The relation of the polarization states as a function of the rotating angle of the PRC array is summarized from the principle of operation. Therefore, the results show that the linear polarization (LP) from an incident wave can be reconfigured to LP, right- and left-hand circular polarizations by rotating the free-standing converter screen. The cell periods along x- and y-directions are the same, and their total height is 6 mm. The fractional bandwidth of axial ratio (AR) less than 3 dB is more than 15% with respect to the center operating frequency of 10 GHz at normal incidence. Simultaneously, the AR characteristics of different incidence angles for oblique incidence with TE and TM polarizations show that the proposed PRC has good polarization and angle stabilities. Moreover, the general design procedure and method is presented. Finally, a circularly shaped PRC array using the proposed PRC element based on the MFSS design is fabricated and measured. The agreement between the simulated and measured results is excellent.",
"title": ""
},
{
"docid": "45840f792b397da02fadc644d35faaf7",
"text": "Do there exist general principles, which any system must obey in order to achieve advanced general intelligence using feasible computational resources? Here we propose one candidate: “cognitive synergy,” a principle which suggests that general intelligences must contain different knowledge creation mechanisms corresponding to different sorts of memory (declarative, procedural, sensory/episodic, attentional, intentional); and that these different mechanisms must be interconnected in such a way as to aid each other in overcoming memory-type-specific combinatorial explosions.",
"title": ""
},
{
"docid": "997b9f66d7695c8694936f2f0965d197",
"text": "The DETER project aims to advance cybersecurity research and education. Over the past seven years, the project has focused on improving and redefining the methods, technology, and infrastructure for developing cyberdefense technology. The project's research results are put into practice by DeterLab, a public, free-for-use experimental facility available to researchers and educators worldwide. Educators can use DeterLab's exercises to teach cybersecurity technology and practices. This use of DeterLab provides valuable feedback on DETER innovations and helps grow the pool of cybersecurity innovators and cyberdefenders.",
"title": ""
},
{
"docid": "9a41380c2f94f222fd31ae1428bdbb17",
"text": "This paper presents a compact system-on-package-based front-end solution for 60-GHz-band wireless communication/sensor applications that consists of fully integrated three-dimensional (3-D) cavity filters/duplexers and antenna. The presented concept is applied to the design, fabrication, and testing of V-band (receiver (Rx): 59-61.5 GHz, transmitter (Tx): 61.5-64 GHz) transceiver front-end module using multilayer low-temperature co-fired ceramic technology. Vertically stacked 3-D low-loss cavity bandpass filters are developed for Rx and Tx channels to realize a fully integrated compact duplexer. Each filter exhibits excellent performance (Rx: IL<2.37 dB, 3-dB bandwidth (BW) /spl sim/3.5%, Tx: IL<2.39 dB, 3-dB BW /spl sim/3.33%). The fabrication tolerances contributing to the resonant frequency experimental downshift were investigated and taken into account in the simulations of the rest devices. The developed cavity filters are utilized to realize the compact duplexers by using microstrip T-junctions. This integrated duplexer shows Rx/Tx BW of 4.20% and 2.66% and insertion loss of 2.22 and 2.48 dB, respectively. The different experimental results of the duplexer compared to the individual filters above are attributed to the fabrication tolerance, especially on microstrip T-junctions. The measured channel-to-channel isolation is better than 35.2 dB across the Rx band (56-58.4 GHz) and better than 38.4 dB across the Tx band (59.3-60.9 GHz). The reported fully integrated Rx and Tx filters and the dual-polarized cross-shaped patch antenna functions demonstrate a novel 3-D deployment of embedded components equipped with an air cavity on the top. The excellent overall performance of the full integrated module is verified through the 10-dB BW of 2.4 GHz (/spl sim/4.18%) at 57.45 and 2.3 GHz (/spl sim/3.84%) at 59.85 GHz and the measured isolation better than 49 dB across the Rx band and better than 51.9 dB across the Tx band.",
"title": ""
},
{
"docid": "6d28e7e400c58e3eb83621dae703fbfa",
"text": "Recently, several papers for reading meters remotely using RFID/USN technologies have been presented. In the case of wireless water meters, there has been neither commercial product nor paper. In this paper, we describe the design and implementation of wireless digital water meter with low power consumption. We use magnetic hole sensors to compute the amount of water consumption. The meter of water consumption is transferred via ZigBee wireless protocol to a gateway. Low power consumption design is essential since a battery should last till the life time of water meter. We suggest that dual batteries having 3 V, 3000 mAh, would last 8 years by analyzing the real power consumption of our water meter.",
"title": ""
},
{
"docid": "6b0349726d029403279ab32355bf74d4",
"text": "This paper is about tracking an extended object or a group target, which gives rise to a varying number of measurements from different measurement sources. For this purpose, the shape of the target is tracked in addition to its kinematics. The target extent is modeled with a new approach called Random Hypersurface Model (RHM) that assumes varying measurement sources to lie on scaled versions of the shape boundaries. In this paper, a star-convex RHM is introduced for tracking star-convex shape approximations of targets. Bayesian inference for star-convex RHMs is performed by means of a Gaussian-assumed state estimator allowing for an efficient recursive closed-form measurement update. Simulations demonstrate the performance of this approach for typical extended object and group tracking scenarios.",
"title": ""
},
{
"docid": "68e137f9c722f833a7fdbc8032fc58be",
"text": "BACKGROUND\nChronic Obstructive Pulmonary Disease (COPD) has been a leading cause of morbidity and mortality worldwide, over the years. In 1995, the implementation of a respiratory function survey seemed to be an adequate way to draw attention to neglected respiratory symptoms and increase the awareness of spirometry surveys. By 2002 there were new consensual guidelines in place and the awareness that prevalence of COPD depended on the criteria used for airway obstruction definition. The purpose of this study is to revisit the two studies and to turn public some of the data and respective methodologies.\n\n\nMETHODS\nFrom Pneumobil study database of 12,684 subjects, only the individuals with 40+ years old (n = 9.061) were selected. The 2002 study included a randomized representative sample of 1,384 individuals with 35-69 years old.\n\n\nRESULTS\nThe prevalence of COPD was 8.96% in Pneumobil and 5.34% in the 2002 study. In both studies, presence of COPD was greater in males and there was a positive association between presence of COPD and older age groups. Smokers and ex-smokers showed a higher proportion of cases of COPD.\n\n\nCONCLUSIONS\nPrevalence in Portugal is lower than in other European countries. This may be related to lower smokers' prevalence. Globally, the most important risk factors associated with COPD were age over 60 years, male gender and smoking exposure. All aspects and limitations regarding different recruitment methodologies and different criteria for defining COPD cases highlight the need of a standardized method to evaluate COPD prevalence and associated risks factors, whose results can be compared across countries, as it is the case of BOLD project.",
"title": ""
},
{
"docid": "4e8f7fdba06ae7973e3d25cf35399aaf",
"text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.",
"title": ""
},
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
},
{
"docid": "685e6338727b4ab899cffe2bbc1a20fc",
"text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.",
"title": ""
},
{
"docid": "291f3f95cf06f6ac3bda91178ee3ce1b",
"text": "this paper discusses the several research methodologies that can be used in Computer Science (CS) and Information Systems (IS). The research methods vary according to the science domain and project field. However a little of research methodologies can be reasonable for CS and IS. KeywordsComputer Science(CS), Information Systems (IS),Research Methodologies.",
"title": ""
},
{
"docid": "f74cfc71a268b2155fe6920b00bc62d4",
"text": "The vaginal microbiome in healthy women changes over short periods of time, differs among individuals, and varies in its response to sexual intercourse.",
"title": ""
},
{
"docid": "b03ae1c57ed0e5c49fb99a8232d694d6",
"text": "Introduction The Neolithic Hongshan Culture flourished between 4500 and 3000 BCE in what is today northeastern China and Inner Mongolia (Figure 1). Village sites are found in the northern part of the region, while the two ceremonial sites of Dongshanzui and Niuheliang are located in the south, where villages are fewer (Guo 1995, Li 2003). The Hongshan inhabitants included agriculturalists who cultivated millet and pigs for subsistence, and accomplished artisans who carved finely crafted jades and made thin black-on-red pottery. Organized labor of a large number of workers is suggested by several impressive constructions, including an artificial hill containing three rings of marble-like stone, several high cairns with elaborate interiors and a 22 meter long building which contained fragments of life-sized statues. One fragment was a face with inset green jade eyes (Figure 2). A ranked society is implied by the burials, which include decorative jades made in specific, possibly iconographic, shapes. It has been argued previously that the sizes and locations of the mounded tombs imply at least three elite ranks (Nelson 1996).",
"title": ""
}
] |
scidocsrr
|
9feb0d3750b4d5da6182e3264f8cedab
|
Depth Silhouettes Context: A New Robust Feature for Human Tracking and Activity Recognition based on Advanced Hidden Markov Model
|
[
{
"docid": "fa440af1d9ec65caf3cd37981919b56e",
"text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "37aca8c5ec945d4a91984683538b0bc6",
"text": "Little is known about the neurobiological mechanisms underlying prosocial decisions and how they are modulated by social factors such as perceived group membership. The present study investigates the neural processes preceding the willingness to engage in costly helping toward ingroup and outgroup members. Soccer fans witnessed a fan of their favorite team (ingroup member) or of a rival team (outgroup member) experience pain. They were subsequently able to choose to help the other by enduring physical pain themselves to reduce the other's pain. Helping the ingroup member was best predicted by anterior insula activation when seeing him suffer and by associated self-reports of empathic concern. In contrast, not helping the outgroup member was best predicted by nucleus accumbens activation and the degree of negative evaluation of the other. We conclude that empathy-related insula activation can motivate costly helping, whereas an antagonistic signal in nucleus accumbens reduces the propensity to help.",
"title": ""
},
{
"docid": "6dc4cefb15977ba4b4f33f7ce792196a",
"text": "Fuel cells convert chemical energy directly into electrical energy with high efficiency and low emission of pollutants. However, before fuel-cell technology can gain a significant share of the electrical power market, important issues have to be addressed. These issues include optimal choice of fuel, and the development of alternative materials in the fuel-cell stack. Present fuel-cell prototypes often use materials selected more than 25 years ago. Commercialization aspects, including cost and durability, have revealed inadequacies in some of these materials. Here we summarize recent progress in the search and development of innovative alternative materials.",
"title": ""
},
{
"docid": "432e7ae2e76d76dbb42d92cd9103e3d2",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
{
"docid": "f6f045cad34d50eea8517ee9fbb3da57",
"text": "The increasing rate of high (secondary) school leavers choosing academic majors to study at the university without proper guidance has most times left students with unfavorable consequences including low grades, extra year(s), the need to switch programs and ultimately having to withdraw from the university. In a bid to proffer a solution to the issue, this research aims to build an expert system that recommends university or academic majors to high school students in developing countries where there is a dearth of human career counselors. This is to reduce the adverse effects caused as a result of wrong choices made by students. A mobile rule-based expert system supported with ontology was developed for easy accessibility by the students.",
"title": ""
},
{
"docid": "6f30153ddb49d6cec554dbec53f0ad0e",
"text": "Recommendations can greatly benefit from good representations of the user state at recommendation time. Recent approaches that leverage Recurrent Neural Networks (RNNs) for session-based recommendations have shown that Deep Learning models can provide useful user representations for recommendation. However, current RNN modeling approaches summarize the user state by only taking into account the sequence of items that the user has interacted with in the past, without taking into account other essential types of context information such as the associated types of user-item interactions, the time gaps between events and the time of day for each interaction. To address this, we propose a new class of Contextual Recurrent Neural Networks for Recommendation (CRNNs) that can take into account the contextual information both in the input and output layers and modifying the behavior of the RNN by combining the context embedding with the item embedding and more explicitly, in the model dynamics, by parametrizing the hidden unit transitions as a function of context information. We compare our CRNNs approach with RNNs and non-sequential baselines and show good improvements on the next event prediction task.",
"title": ""
},
{
"docid": "fb3002fff98d4645188910989638af69",
"text": "Stress is important in substance use disorders (SUDs). Mindfulness training (MT) has shown promise for stress-related maladies. No studies have compared MT to empirically validated treatments for SUDs. The goals of this study were to assess MT compared to cognitive behavioral therapy (CBT) in substance use and treatment acceptability, and specificity of MT compared to CBT in targeting stress reactivity. Thirty-six individuals with alcohol and/or cocaine use disorders were randomly assigned to receive group MT or CBT in an outpatient setting. Drug use was assessed weekly. After treatment, responses to personalized stress provocation were measured. Fourteen individuals completed treatment. There were no differences in treatment satisfaction or drug use between groups. The laboratory paradigm suggested reduced psychological and physiological indices of stress during provocation in MT compared to CBT. This pilot study provides evidence of the feasibility of MT in treating SUDs and suggests that MT may be efficacious in targeting stress.",
"title": ""
},
{
"docid": "86cdce8b04818cc07e1003d85305bd40",
"text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.",
"title": ""
},
{
"docid": "6c1317ef88110756467a10c4502851bb",
"text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.",
"title": ""
},
{
"docid": "bc388488c5695286fe7d7e56ac15fa94",
"text": "In this paper a new parking guiding and information system is described. The system assists the user to find the most suitable parking space based on his/her preferences and learned behavior. The system takes into account parameters such as driver's parking duration, arrival time, destination, type preference, cost preference, driving time, and walking distance as well as time-varying parking rules and pricing. Moreover, a prediction algorithm is proposed to forecast the parking availability for different parking locations for different times of the day based on the real-time parking information, and previous parking availability/occupancy data. A novel server structure is used to implement the system. Intelligent parking assist system reduces the searching time for parking spots in urban environments, and consequently leads to a reduction in air pollutions and traffic congestion. On-street parking meters, off-street parking garages, as well as free parking spaces are considered in our system.",
"title": ""
},
{
"docid": "3f657657a24c03038bd402498b7abddd",
"text": "We propose a system for real-time animation of eyes that can be interactively controlled in a WebGL enabled device using a small number of animation parameters, including gaze. These animation parameters can be obtained using traditional keyframed animation curves, measured from an actor's performance using off-the-shelf eye tracking methods, or estimated from the scene observed by the character, using behavioral models of human vision. We present a model of eye movement, that includes not only movement of the globes, but also of the eyelids and other soft tissues in the eye region. The model includes formation of expression wrinkles in soft tissues. To our knowledge this is the first system for real-time animation of soft tissue movement around the eyes based on gaze input.",
"title": ""
},
{
"docid": "b2124dfd12529c1b72899b9866b34d03",
"text": "In today's world, the amount of stored information has been enormously increasing day by day which is generally in the unstructured form and cannot be used for any processing to extract useful information, so several techniques such as summarization, classification, clustering, information extraction and visualization are available for the same which comes under the category of text mining. Text Mining can be defined as a technique which is used to extract interesting information or knowledge from the text documents. Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values.",
"title": ""
},
{
"docid": "ef81266ae8c2023ea35dca8384db3803",
"text": "Linked Open Data has been recognized as a useful source of background knowledge for building content-based recommender systems. Vast amount of RDF data, covering multiple domains, has been published in freely accessible datasets. In this paper, we present an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs used for building content-based recommender system. We generate sequences by leveraging local information from graph sub-structures and learn latent numerical representations of entities in RDF graphs. Our evaluation on two datasets in the domain of movies and books shows that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be effectively used in content-based recommender systems.",
"title": ""
},
{
"docid": "80504ceedad8eb61c55d6b3aea91b97e",
"text": "In this paper, an airport departure scheduling tool for aircraft is presented based on constraint satisfaction techniques. Airports are getting more and more congested with the available runway configuration as one of the most constraining factors. A possibility to alleviate this congestion is to assist controllers in the planning and scheduling process of aircraft. The prototype presented here is aimed to offer such assistance in the establishment of an optimal departure schedule and the planning of initial climb phases for departing aircraft. This goal is accomplished by modelling the scheduling problem as a constraint satisfaction problem, using ILOG Solver and Scheduler as an implementation environment.",
"title": ""
},
{
"docid": "c53021193518ebdd7006609463bafbcc",
"text": "BACKGROUND AND OBJECTIVES\nSleep is important to child development, but there is limited understanding of individual developmental patterns of sleep, their underlying determinants, and how these influence health and well-being. This article explores the presence of various sleep patterns in children and their implications for health-related quality of life.\n\n\nMETHODS\nData were collected from the Longitudinal Study of Australian Children. Participants included 2926 young children followed from age 0 to 1 years to age 6 to 7 years. Data on sleep duration were collected every 2 years, and covariates (eg, child sleep problems, maternal education) were assessed at baseline. Growth mixture modeling was used to identify distinct longitudinal patterns of sleep duration and significant covariates. Linear regression examined whether the distinct sleep patterns were significantly associated with health-related quality of life.\n\n\nRESULTS\nThe results identified 4 distinct sleep duration patterns: typical sleepers (40.6%), initially short sleepers (45.2%), poor sleepers (2.5%), and persistent short sleepers (11.6%). Factors such as child sleep problems, child irritability, maternal employment, household financial hardship, and household size distinguished between the trajectories. The results demonstrated that the trajectories had different implications for health-related quality of life. For instance, persistent short sleepers had poorer physical, emotional, and social health than typical sleepers.\n\n\nCONCLUSIONS\nThe results provide a novel insight into the nature of child sleep and the implications of differing sleep patterns for health-related quality of life. The findings could inform the development of effective interventions to promote healthful sleep patterns in children.",
"title": ""
},
{
"docid": "be8864d6fb098c8a008bfeea02d4921a",
"text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.",
"title": ""
},
{
"docid": "4f5f128195592fe881269f54fd3424e7",
"text": "In this research, a new method is proposed for the optimization of warship spare parts stock with genetic algorithm. Warships should fulfill her duties in all circumstances. Considering the warships have more than a hundred thousand unique parts, it is a very hard problem to decide which spare parts should be stocked at warehouse aiming to use in case of failure. In this study, genetic algorithm that is a heuristic optimization method is used to solve this problem. The demand quantity, the criticality and the cost of parts is used for optimization. A genetic algorithm with very long chromosome is used, i.e. over 1000 genes in one chromosome. The outputs of the method is analyzed and compared with the Price Sensitive 0.5 FLSIP+ model, which is widely used over navies, and came to a conclusion that the proposed method is better.",
"title": ""
},
{
"docid": "050dd71858325edd4c1a42fc1a25de95",
"text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.",
"title": ""
},
{
"docid": "ae5bf888ce9a61981be60b9db6fc2d9c",
"text": "Inverting the hash values by performing brute force computation is one of the latest security threats on password based authentication technique. New technologies are being developed for brute force computation and these increase the success rate of inversion attack. Honeyword base authentication protocol can successfully mitigate this threat by making password cracking detectable. However, the existing schemes have several limitations like Multiple System Vulnerability, Weak DoS Resistivity, Storage Overhead, etc. In this paper we have proposed a new honeyword generation approach, identified as Paired Distance Protocol (PDP) which overcomes almost all the drawbacks of previously proposed honeyword generation approaches. The comprehensive analysis shows that PDP not only attains a high detection rate of 97.23% but also reduces the storage cost to a great extent.",
"title": ""
},
{
"docid": "e8b486ce556a0193148ffd743661bce9",
"text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.",
"title": ""
},
{
"docid": "0ef117ca4663f523d791464dad9a7ebf",
"text": "In this paper, a circularly polarized, omnidirectional side-fed bifilar helix antenna, which does not require a ground plane is presented. The antenna has a height of less than 0.1λ and the maximum boresight gain of 1.95dB, with 3dB beamwidth of 93°. The impedance bandwidth of the antenna for VSWR≤2 (with reference to resonant input resistance of 25Ω) is 2.7%. The simulated axial ratio(AR) at the resonant frequency 860MHz is 0.9 ≤AR≤ 1.0 in the whole hemisphere except small region around the nulls. The polarization bandwidth for AR≤3dB is 34.7%. The antenna is especially useful for high speed aerodynamic bodies made of composite materials (such as UAVs) where low profile antennas are essential to reduce air resistance and/or proper metallic ground is not available for monopole-type antenna.",
"title": ""
}
] |
scidocsrr
|
f48f65051d76893f827a434bfb69ab6a
|
Diversifying query suggestions based on query documents
|
[
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
{
"docid": "12be3f9c1f02ad3f26462ab841a80165",
"text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).",
"title": ""
}
] |
[
{
"docid": "3553d1dc8272bf0366b2688e5107aa3f",
"text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.",
"title": ""
},
{
"docid": "5d4fdb1f203b3109970706a3fb9508b3",
"text": "The paper focuses on a high efficiency Zero Voltage Transition (ZVT) DC-DC full-bridge converter using a high-frequency transformer. A novel structure with recovery of energy on load side is proposed. Such solution is particularly suitable for photovoltaic applications which require low losses and cannot recovery energy on source side. Moreover, a suitable architecture of snubber circuit that permits to improve significantly the efficiency is discussed. Analysis of its operating modes and a comparison of the conversion efficiencies with conventional ZVT-PWM converters are given. Design example of a 750 W 100 kHz DC-DC converter and different PSPICE simulation tests are provided.",
"title": ""
},
{
"docid": "27029a5e18e5d874606a87f0d238cd14",
"text": "User behavior provides many cues to improve the relevance of search results through personalization. One aspect of user behavior that provides especially strong signals for delivering better relevance is an individual's history of queries and clicked documents. Previous studies have explored how short-term behavior or long-term behavior can be predictive of relevance. Ours is the first study to assess how short-term (session) behavior and long-term (historic) behavior interact, and how each may be used in isolation or in combination to optimally contribute to gains in relevance through search personalization. Our key findings include: historic behavior provides substantial benefits at the start of a search session; short-term session behavior contributes the majority of gains in an extended search session; and the combination of session and historic behavior out-performs using either alone. We also characterize how the relative contribution of each model changes throughout the duration of a session. Our findings have implications for the design of search systems that leverage user behavior to personalize the search experience.",
"title": ""
},
{
"docid": "1397a3996f2283ff718512af5b9a6294",
"text": "Two experiments showed that framing an athletic task as diagnostic of negative racial stereotypes about Black or White athletes can impede their performance in sports. In Experiment 1, Black participants performed significantly worse than did control participants when performance on a golf task was framed as diagnostic of \"sports intelligence.\" In comparison, White participants performed worse than did control participants when the golf task was framed as diagnostic of \"natural athletic ability.\" Experiment 2 observed the effect of stereotype threat on the athletic performance of White participants for whom performance in sports represented a significant measure of their self-worth. The implications of the findings for the theory of stereotype threat (C. M. Steele, 1997) and for participation in sports are discussed.",
"title": ""
},
{
"docid": "af4106bc4051e01146101aeb58a4261f",
"text": "In recent years a great amount of research has focused on algorithms that learn features from unlabeled data. In this work we propose a model based on the Self-Organizing Map (SOM) neural network to learn features useful for the problem of automatic natural images classification. In particular we use the SOM model to learn single-layer features from the extremely challenging CIFAR-10 dataset, containing 60.000 tiny labeled natural images, and subsequently use these features with a pyramidal histogram encoding to train a linear SVM classifier. Despite the large number of images, the proposed feature learning method requires only few minutes on an entry-level system, however we show that a supervised classifier trained with learned features provides significantly better results than using raw pixels values or other handcrafted features designed specifically for image classification. Moreover, exploiting the topological property of the SOM neural network, it is possible to reduce the number of features and speed up the supervised training process combining topologically close neurons, without repeating the feature learning process.",
"title": ""
},
{
"docid": "498d27f4aaf9249f6f1d6a6ae5554d0e",
"text": "Association rules are ”if-then rules” with two measures which quantify the support and confidence of the rule for a given data set. Having their origin in market basked analysis, association rules are now one of the most popular tools in data mining. This popularity is to a large part due to the availability of efficient algorithms. The first and arguably most influential algorithm for efficient association rule discovery is Apriori. In the following we will review basic concepts of association rule discovery including support, confidence, the apriori property, constraints and parallel algorithms. The core consists of a review of the most important algorithms for association rule discovery. Some familiarity with concepts like predicates, probability, expectation and random variables is assumed.",
"title": ""
},
{
"docid": "bb24e185c02dd096ba12654392181774",
"text": "The authors examined 2 ways reward might increase creativity. First, reward contingent on creativity might increase extrinsic motivation. Studies 1 and 2 found that repeatedly giving preadolescent students reward for creative performance in 1 task increased their creativity in subsequent tasks. Study 3 reported that reward promised for creativity increased college students' creative task performance. Second, expected reward for high performance might increase creativity by enhancing perceived self-determination and, therefore, intrinsic task interest. Study 4 found that employees' intrinsic job interest mediated a positive relationship between expected reward for high performance and creative suggestions offered at work. Study 5 found that employees' perceived self-determination mediated a positive relationship between expected reward for high performance and the creativity of anonymous suggestions for helping the organization.",
"title": ""
},
{
"docid": "482bc3d151948bad9fbfa02519fbe61a",
"text": "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and use them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches having even access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a-priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring to explicitly model the underlying physical laws.",
"title": ""
},
{
"docid": "1ada0fc6b22bba07d9baf4ccab437671",
"text": "Tree-based path planners have been shown to be well suited to solve various high dimensional motion planning problems. Here we present a variant of the Rapidly-Exploring Random Tree (RRT) path planning algorithm that is able to explore narrow passages or difficult areas more effectively. We show that both workspace obstacle information and C-space information can be used when deciding which direction to grow. The method includes many ways to grow the tree, some taking into account the obstacles in the environment. This planner works best in difficult areas when planning for free flying rigid or articulated robots. Indeed, whereas the standard RRT can face difficulties planning in a narrow passage, the tree based planner presented here works best in these areas",
"title": ""
},
{
"docid": "7adf452c728be4552d5588f8b3af5070",
"text": "In this paper, we conduct an empirical investigation of neural query graph ranking approaches for the task of complex question answering over knowledge graphs. We experiment with six different ranking models and propose a novel self-attention based slot matching model which exploits the inherent structure of query graphs, our logical form of choice. Our proposed model generally outperforms the other models on two QA datasets over the DBpedia knowledge graph, evaluated in different settings. In addition, we show that transfer learning from the larger of those QA datasets to the smaller dataset yields substantial improvements, effectively offsetting the general lack of training data.",
"title": ""
},
{
"docid": "e8e8869d74dd4667ceff63c8a24caa27",
"text": "We address the problem of recommending suitable jobs to people who are seeking a new job. We formulate this recommendation problem as a supervised machine learning problem. Our technique exploits all past job transitions as well as the data associated with employees and institutions to predict an employee's next job transition. We train a machine learning model using a large number of job transitions extracted from the publicly available employee profiles in the Web. Experiments show that job transitions can be accurately predicted, significantly improving over a baseline that always predicts the most frequent institution in the data.",
"title": ""
},
{
"docid": "ea49d288ffefd549f77519c90de51fbc",
"text": "Text line detection is a prerequisite procedure of mathematical formula recognition, however, many incorrectly segmented text lines are often produced due to the two-dimensional structures of mathematics when using existing segmentation methods such as Projection Profiles Cutting or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line and those between successive lines are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines from mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly through adopting the proposed text line detection method.",
"title": ""
},
{
"docid": "7b7e7db68753dc40fce611ce06dc7c74",
"text": "Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-) automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.",
"title": ""
},
{
"docid": "1045117f9e6e204ff51ef67a1aff031f",
"text": "Application of models to data is fraught. Data-generating collaborators often only have a very basic understanding of the complications of collating, processing and curating data. Challenges include: poor data collection practices, missing values, inconvenient storage mechanisms, intellectual property, security and privacy. All these aspects obstruct the sharing and interconnection of data, and the eventual interpretation of data through machine learning or other approaches. In project reporting, a major challenge is in encapsulating these problems and enabling goals to be built around the processing of data. Project overruns can occur due to failure to account for the amount of time required to curate and collate. But to understand these failures we need to have a common language for assessing the readiness of a particular data set. This position paper proposes the use of data readiness levels: it gives a rough outline of three stages of data preparedness and speculates on how formalisation of these levels into a common language for data readiness could facilitate project management.",
"title": ""
},
{
"docid": "fd2e7025271565927f43784f0c69c3fb",
"text": "In this paper, we have proposed a fingerprint orientation model based on 2D Fourier expansions (FOMFE) in the phase plane. The FOMFE does not require prior knowledge of singular points (SPs). It is able to describe the overall ridge topology seamlessly, including the SP regions, even for noisy fingerprints. Our statistical experiments on a public database show that the proposed FOMFE can significantly improve the accuracy of fingerprint feature extraction and thus that of fingerprint matching. Moreover, the FOMFE has a low-computational cost and can work very efficiently on large fingerprint databases. The FOMFE provides a comprehensive description for orientation features, which has enabled its beneficial use in feature-related applications such as fingerprint indexing. Unlike most indexing schemes using raw orientation data, we exploit FOMFE model coefficients to generate the feature vector. Our indexing experiments show remarkable results using different fingerprint databases",
"title": ""
},
{
"docid": "35ae4e59fd277d57c2746dfccf9b26b0",
"text": "In the field of saliency detection, many graph-based algorithms heavily depend on the accuracy of the pre-processed superpixel segmentation, which leads to significant sacrifice of detail information from the input image. In this paper, we propose a novel bottom-up saliency detection approach that takes advantage of both region-based features and image details. To provide more accurate saliency estimations, we first optimize the image boundary selection by the proposed erroneous boundary removal. By taking the image details and region-based estimations into account, we then propose the regularized random walks ranking to formulate pixel-wised saliency maps from the superpixel-based background and foreground saliency estimations. Experiment results on two public datasets indicate the significantly improved accuracy and robustness of the proposed algorithm in comparison with 12 state-of-the-art saliency detection approaches.",
"title": ""
},
{
"docid": "ebd8e2cfc51e78fbf6772128d8e4e479",
"text": "This paper uses delaying functions, functions that require signiicant calculation time, in the development of a one-pass lottery scheme in which winners are chosen fairly using only internal information. Since all this information may be published (even before the lottery closes), anyone can do the calculation and therefore verify that the winner was chosen correctly. Since the calculation uses a delaying function, ticket purchasers cannot take advantage of this information. Fraud on the part of the lottery agent is detectable and no single ticket purchaser needs to be trusted. Coalitions of purchasers attempting to control the winning ticket calculation are either unsuccessful or are detected. The scheme can be made resistant to coalitions of arbitrary size. Since we assume that coalitions of larger size are harder to assemble, the probability that the lottery is fair can be made arbitrarily high. The paper deenes delaying functions and contrasts them with pricing functions 8] and time-lock puzzles 16].",
"title": ""
},
{
"docid": "c8e4450de63dc54b5802566d589d4cdc",
"text": "BACKGROUND\nMore than 1.5 million Americans have Parkinson disease (PD), and this figure is expected to rise as the population ages. However, the dental literature offers little information about the illness.\n\n\nTYPES OF STUDIES REVIEWED\nThe authors conducted a MEDLINE search using the key terms \"Parkinson's disease,\" \"medical management\" and \"dentistry.\" They selected contemporaneous articles published in peer-reviewed journals and gave preference to articles reporting randomized controlled trials.\n\n\nRESULTS\nPD is a progressive neurodegenerative disorder caused by loss of dopaminergic and nondopaminergic neurons in the brain. These deficits result in tremor, slowness of movement, rigidity, postural instability and autonomic and behavioral dysfunction. Treatment consists of administering medications that replace dopamine, stimulate dopamine receptors and modulate other neurotransmitter systems.\n\n\nCLINICAL IMPLICATIONS\nOral health may decline because of tremors, muscle rigidity and cognitive deficits. The dentist should consult with the patient's physician to establish the patient's competence to provide informed consent and to determine the presence of comorbid illnesses. Scheduling short morning appointments that begin 90 minutes after administration of PD medication enhances the patient's ability to cooperate with care. Inclination of the dental chair at 45 degrees, placement of a bite prop, use of a rubber dam and high-volume oral evacuation enhance airway protection. To avoid adverse drug interactions with levodopa and entacapone, the dentist should limit administration of local anesthetic agents to three cartridges of 2 percent lidocaine with 1:100,000 epinephrine per half hour, and patients receiving selegiline should not be given agents containing epinephrine or levonordefrin. The dentist should instruct the patient and the caregiver in good oral hygiene techniques.",
"title": ""
},
{
"docid": "05e8879a48e3a9808ec74b5bf225c562",
"text": "Although peribronchial lymphatic drainage of the lung has been well characterized, lymphatic drainage in the visceral pleura is less well understood. The objective of the present study was to evaluate the lymphatic drainage of lung segments in the visceral pleura. Adult, European cadavers were examined. Cadavers with a history of pleural or pulmonary disease were excluded. The cadavers had been refrigerated but not embalmed. The lungs were surgically removed and re-warmed. Blue dye was injected into the subpleural area and into the first draining visceral pleural lymphatic vessel of each lung segment. Twenty-one cadavers (7 males and 14 females; mean age 80.9 years) were dissected an average of 9.8 day postmortem. A total of 380 dye injections (in 95 lobes) were performed. Lymphatic drainage of the visceral pleura followed a segmental pathway in 44.2% of the injections (n = 168) and an intersegmental pathway in 55.8% (n = 212). Drainage was found to be both intersegmental and interlobar in 2.6% of the injections (n = 10). Lymphatic drainage in the visceral pleura followed an intersegmental pathway in 22.8% (n = 13) of right upper lobe injections, 57.9% (n = 22) of right middle lobe injections, 83.3% (n = 75) of right lower lobe injections, 21% (n = 21) of left upper lobe injections, and 85.3% (n = 81) of left lower lobe injections. In the lung, lymphatic drainage in the visceral pleura appears to be more intersegmental than the peribronchial pathway is—especially in the lower lobes. The involvement of intersegmental lymphatic drainage in the visceral pleura should now be evaluated during pulmonary resections (and especially sub-lobar resections) for lung cancer.",
"title": ""
},
{
"docid": "db04a402e0c7d93afdaf34c0d55ded9a",
"text": " Drowsiness and increased tendency to fall asleep during daytime is still a generally underestimated problem. An increased tendency to fall asleep limits the efficiency at work and substantially increases the risk of accidents. Reduced alertness is difficult to assess, particularly under real life settings. Most of the available measuring procedures are laboratory-oriented and their applicability under field conditions is limited; their validity and sensitivity are often a matter of controversy. The spontaneous eye blink is considered to be a suitable ocular indicator for fatigue diagnostics. To evaluate eye blink parameters as a drowsiness indicator, a contact-free method for the measurement of spontaneous eye blinks was developed. An infrared sensor clipped to an eyeglass frame records eyelid movements continuously. In a series of sessions with 60 healthy adult participants, the validity of spontaneous blink parameters was investigated. The subjective state was determined by means of questionnaires immediately before the recording of eye blinks. The results show that several parameters of the spontaneous eye blink can be used as indicators in fatigue diagnostics. The parameters blink duration and reopening time in particular change reliably with increasing drowsiness. Furthermore, the proportion of long closure duration blinks proves to be an informative parameter. The results demonstrate that the measurement of eye blink parameters provides reliable information about drowsiness/sleepiness, which may also be applied to the continuous monitoring of the tendency to fall asleep.",
"title": ""
}
] |
scidocsrr
|
5c0abbfca7d7300f5c954314f733aa0d
|
Maturity assessment models : a design science research approach
|
[
{
"docid": "7ca5eac9be1ba8c1738862f24dd707d2",
"text": "This essay develops the philosophical foundations for design research in the Technology of Information Systems (TIS). Traditional writings on philosophy of science cannot fully describe this mode of research, which dares to intervene and improve to realize alternative futures instead of explaining or interpreting the past to discover truth. Accordingly, in addition to philosophy of science, the essay draws on writings about the act of designing, philosophy of technology and the substantive (IS) discipline. I define design research in TIS as in(ter)vention in the representational world defined by the hierarchy of concerns following semiotics. The complementary nature of the representational (internal) and real (external) environments provides the basis to articulate the dual ontological and epistemological bases. Understanding design research in TIS in this manner suggests operational principles in the internal world as the form of knowledge created by design researchers, and artifacts that embody these are seen as situated instantiations of normative theories that affect the external phenomena of interest. Throughout the paper, multiple examples illustrate the arguments. Finally, I position the resulting ‘method’ for design research vis-à-vis existing research methods and argue for its legitimacy as a viable candidate for research in the IS discipline.",
"title": ""
}
] |
[
{
"docid": "09ee1b6d80facc1c21248e855f17a17d",
"text": "AIM\nTo examine the relationship between calf circumference and muscle mass, and to evaluate the suitability of calf circumference as a surrogate marker of muscle mass for the diagnosis of sarcopenia among middle-aged and older Japanese men and women.\n\n\nMETHODS\nA total of 526 adults aged 40-89 years participated in the present cross-sectional study. The maximum calf circumference was measured in a standing position. Appendicular skeletal muscle mass was measured using dual-energy X-ray absorptiometry, and the skeletal muscle index was calculated as appendicular skeletal muscle mass divided by the square of the height (kg/m(2)). The cut-off values for sarcopenia were defined as a skeletal muscle index of less than -2 standard deviations of the mean value for Japanese young adults, as defined previously.\n\n\nRESULTS\nCalf circumference was positively correlated with appendicular skeletal muscle (r = 0.81 in men, r = 0.73 in women) and skeletal muscle index (r = 0.80 in men, r = 0.69 in women). In receiver operating characteristic analysis, the optimal calf circumference cut-off values for predicting sarcopenia were 34 cm (sensitivity 88%, specificity 91%) in men and 33 cm (sensitivity 76%, specificity 73%) in women.\n\n\nCONCLUSIONS\nCalf circumference was positively correlated with appendicular skeletal muscle mass and skeletal muscle index, and could be used as a surrogate marker of muscle mass for diagnosing sarcopenia. The suggested cut-off values of calf circumference for predicting low muscle mass are <34 cm in men and <33 cm in women.",
"title": ""
},
{
"docid": "f26cc4afade8625576ff631e1ff4f3b4",
"text": "Electromigration and voltage drop (IR-drop) are two major reliability issues in modern IC design. Electromigration gradually creates permanently open or short circuits due to excessive current densities; IR-drop causes insufficient power supply, thus degrading performance or even inducing functional errors because of nonzero wire resistance. Both types of failure can be triggered by insufficient wire widths. Although expanding the wire width alleviates electromigration and IR-drop, unlimited expansion not only increases the routing cost, but may also be infeasible due to the limited routing resource. In addition, electromigration and IR-drop manifest mainly in the power/ground (P/G) network. Therefore, taking wire widths into consideration is desirable to prevent electromigration and IR-drop at P/G routing. Unlike mature digital IC designs, P/G routing in analog ICs has not yet been well studied. In a conventional design, analog designers manually route P/G networks by implementing greedy strategies. However, the growing scale of analog ICs renders manual routing inefficient, and the greedy strategies may be ineffective when electromigration and IR-drop are considered. This study distances itself from conventional manual design and proposes an automatic analog P/G router that considers electromigration and IR-drops. First, employing transportation formulation, this article constructs an electromigration-aware rectilinear Steiner tree with the minimum routing cost. Second, without changing the solution quality, wires are bundled to release routing space for enhancing routability and relaxing congestion. A wire width extension method is subsequently adopted to reduce wire resistance for IR-drop safety. Compared with high-tech designs, the proposed approach achieves equally optimal solutions for electromigration avoidance, with superior efficiencies. Furthermore, via industrial design, experimental results also show the effectiveness and efficiency of the proposed algorithm for electromigration prevention and IR-drop reduction.",
"title": ""
},
{
"docid": "22a3d3ac774a5da4f165e90edcbd1666",
"text": "One of the difficulties of neural machine translation (NMT) is the recall and appropriate translation of low-frequency words or phrases. In this paper, we propose a simple, fast, and effective method for recalling previously seen translation examples and incorporating them into the NMT decoding process. Specifically, for an input sentence, we use a search engine to retrieve sentence pairs whose source sides are similar with the input sentence, and then collect n-grams that are both in the retrieved target sentences and aligned with words that match in the source sentences, which we call “translation pieces”. We compute pseudoprobabilities for each retrieved sentence based on similarities between the input sentence and the retrieved source sentences, and use these to weight the retrieved translation pieces. Finally, an existing NMT model is used to translate the input sentence, with an additional bonus given to outputs that contain the collected translation pieces. We show our method improves NMT translation results up to 6 BLEU points on three narrow domain translation tasks where repetitiveness of the target sentences is particularly salient. It also causes little increase in the translation time, and compares favorably to another alternative retrievalbased method with respect to accuracy, speed, and simplicity of implementation.",
"title": ""
},
{
"docid": "1819af3b3d96c182b7ea8a0e89ba5bbe",
"text": "The fingerprint is one of the oldest and most widely used biometric modality for person identification. Existing automatic fingerprint matching systems perform well when the same sensor is used for both enrollment and verification (regular matching). However, their performance significantly deteriorates when different sensors are used (cross-matching, fingerprint sensor interoperability problem). We propose an automatic fingerprint verification method to solve this problem. It was observed that the discriminative characteristics among fingerprints captured with sensors of different technology and interaction types are ridge orientations, minutiae, and local multi-scale ridge structures around minutiae. To encode this information, we propose two minutiae-based descriptors: histograms of gradients obtained using a bank of Gabor filters and binary gradient pattern descriptors, which encode multi-scale local ridge patterns around minutiae. In addition, an orientation descriptor is proposed, which compensates for the spurious and missing minutiae problem. The scores from the three descriptors are fused using a weighted sum rule, which scales each score according to its verification performance. Extensive experiments were conducted using two public domain benchmark databases (FingerPass and Multi-Sensor Optical and Latent Fingerprint) to show the effectiveness of the proposed system. The results showed that the proposed system significantly outperforms the state-of-the-art methods based on minutia cylinder-code (MCC), MCC with scale, VeriFinger—a commercial SDK, and a thin-plate spline model.",
"title": ""
},
{
"docid": "8573ad563268d5301b38c161c67b2a87",
"text": "A fracture theory for a heterogeneous aggregate material which exhibits a gradual strainsoftening due to microcracking and contains aggregate pieces that are not necessarily small compared to struttural dimensions is developed. Only Mode I is considered. The fracture is modeled as a blunt smeared crack band, which is justified by the random nature of the microstructure. Simple triaxial stress-strain relations which model the strain-softening and describe the effect of gradual microcracking in the crack band are derived. It is shown that it is easier to use compliance rather than stiffness matrices and that it suffices to adjust a single diagonal term of the compliance matrix. The limiting case of this matrix for complete (continuous) cracking is shown to be identical to the inverse of the well-known stiffness matrix for a perfectly cracked material. The material fracture properties are characterized by only three paPlameters -fracture energy, uniaxial strength limit and width of the crack band (fracture Process zone), while the strain-softening modulus is a function of these parameters. A m~thod of determining the fracture energy from measured complete stressstrain relations is' also given. Triaxial stress effects on fracture can be taken into account. The theory is verljied by comparisons with numerous experimental data from the literature. Satisfactory fits of maximum load data as well as resistance curves are achieved and values of the three matetial parameters involved, namely the fracture energy, the strength, and the width of crack b~nd front, are determined from test data. The optimum value of the latter width is found to be about 3 aggregate sizes, which is also justified as the minimum acceptable for a homogeneous continuum modeling. The method of implementing the theory in a finite element code is al$o indicated, and rules for achieving objectivity of results with regard to the analyst's choice of element size are given. Finally, a simple formula is derived to predict from the tensile strength and aggregate size the fracture energy, as well as the strain-softening modulus. A statistical analysis of the errors reveals a drastic improvement compared to the linear fracture th~ory as well as the strength theory. The applicability of fracture mechanics to concrete is thz4 solidly established.",
"title": ""
},
{
"docid": "172e46f40cc459d0ba8033fead3f35b3",
"text": "Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. It also minimizes texture deviation (“slippage” error based on parametric correspondence) to obtain accurate textured mesh approximations. The method begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, it simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole PM sequence. Finally, the charts are packed into a texture atlas. We demonstrate using such atlases to sample color and normal maps over several models.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "c0f1d62b1d1e519f60200e2df7e58833",
"text": "Domain name systems and certificate authority systems may have security and trust problems in their implementation. This article summarizes how these systems work and what the implementation problems may be. There are blockchain-based decentralized solutions that claim to overcome those problems. We provide a brief explanation on how blockchain systems work, and their strengths are explained. DNS security challenges are given. Blockchain-based DNS solutions are classified and described in detail according to their services. The advantages and feasibility of these implementations are discussed. Last but not least, the possibility of the decentralized Internet is questioned.",
"title": ""
},
{
"docid": "92600ef3d90d5289f70b10ccedff7a81",
"text": "In this paper, the chicken farm monitoring system is proposed and developed based on wireless communication unit to transfer data by using the wireless module combined with the sensors that enable to detect temperature, humidity, light and water level values. This system is focused on the collecting, storing, and controlling the information of the chicken farm so that the high quality and quantity of the meal production can be produced. This system is developed to solve several problems in the chicken farm which are many human workers is needed to control the farm, high cost in maintenance, and inaccurate data collected at one point. The proposed methodology really helps in finishing this project within the period given. Based on the research that has been carried out, the system that can monitor and control environment condition (temperature, humidity, and light) has been developed by using the Arduino microcontroller. This system also is able to collect data and operate autonomously.",
"title": ""
},
{
"docid": "91d59b5e08c711e25d83785c198d9ae1",
"text": "The increase in the wireless users has led to the spectrum shortage problem. Federal Communication Commission (FCC) showed that licensed spectrum bands are underutilized, specially TV bands. The IEEE 802.22 standard was proposed to exploit these white spaces in the (TV) frequency spectrum. Cognitive Radio allows unlicensed users to use licensed bands while safeguarding the priority of licensed users. Cognitive Radio is composed of two types of users, licensed users also known as Primary Users(PUs) and unlicensed users also known as Secondary Users(SUs).SUs use the resources when spectrum allocated to PU is vacant, as soon as PU become active, the SU has to leave the channel for PU. Hence the opportunistic access is provided by CR to SUs whenever the channel is vacant. Cognitive Users sense the spectrum continuously and share this sensing information to other SUs, during this spectrum sensing, the network is vulnerable to so many attacks. One of these attacks is Primary User Emulation Attack (PUEA), in which the malicious secondary users can mimic the characteristics of primary users thereby causing legitimate SUs to erroneously identify the attacker as a primary user, and to gain access to wireless channels. PUEA is of two types: Selfish and Malicious attacker. A selfish attacker aims in stealing Bandwidth form legitimate SUs for its own transmissions while malicious attacker mimic the characteristics of PU.",
"title": ""
},
{
"docid": "b2d256cd40e67e3eadd3f5d613ad32fa",
"text": "Due to the wide spread of cloud computing, arises actual question about architecture, design and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications when computing resources are provided on the basis of automated IaaS and PaaS cloud platforms. Such applications consist of hundreds and thousands of service instances, so automated validation and testing of cloud applications developed on the basis of microservice model is a pressing issue. There are constantly developing new methods of testing both individual microservices and cloud applications at a whole. This article presents our vision of a framework for the validation of the microservice cloud applications, providing an integrated approach for the implementation of various testing methods of such applications, from basic unit tests to continuous stability testing.",
"title": ""
},
{
"docid": "322f6321bc34750344064d474206fddb",
"text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.",
"title": ""
},
{
"docid": "0c67afcb351c53c1b9e2b4bcf3b0dc08",
"text": "The Scrum methodology is an agile software development process that works as a project management wrapper around existing engineering practices to iteratively and incrementally develop software. With Scrum, for a developer to receive credit for his or her work, he or she must demonstrate the new functionality provided by a feature at the end of each short iteration during an iteration review session. Such a short-term focus without the checks and balances of sound engineering practices may lead a team to neglect quality. In this paper we present the experiences of three teams at Microsoft using Scrum with an additional nine sound engineering practices. Our results indicate that these teams were able to improve quality, productivity, and estimation accuracy through the combination of Scrum and nine engineering practices.",
"title": ""
},
{
"docid": "9f0206aca2f3cccfb2ca1df629c32c7a",
"text": "Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that \"All models are wrong but some are useful.\" We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a \"do it yourself kit\" for explanations, allowing a practitioner to directly answer \"what if questions\" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.",
"title": ""
},
{
"docid": "58b7fa3dade7f95457d794addf8c7ae1",
"text": "Synchronic and Diachronic Dutch Books are used to justify the use of probability measures to quantify the beliefs held by a rational agent. The argument has been used to reject any non-Bayesian representation of degrees of beliefs. We show that the transferable belief model resists the criticism even though it is not a Bayesian model. We analyze the ‘Peter, Paul and Mary’ example and show how it resists to Dutch Books.",
"title": ""
},
{
"docid": "32b8f971302926fd75f418df0aef91a3",
"text": "Cartoon-to-photo facial translation could be widely used in different applications, such as law enforcement and anime remaking. Nevertheless, current general-purpose imageto-image models usually produce blurry or unrelated results in this task. In this paper, we propose a Cartoon-to-Photo facial translation with Generative Adversarial Networks (CP-GAN) for inverting cartoon faces to generate photo-realistic and related face images. In order to produce convincing faces with intact facial parts, we exploit global and local discriminators to capture global facial features and three local facial regions, respectively. Moreover, we use a specific content network to capture and preserve face characteristic and identity between cartoons and photos. As a result, the proposed approach can generate convincing high-quality faces that satisfy both the characteristic and identity constraints of input cartoon faces. Compared with recent works on unpaired image-to-image translation, our proposed method is able to generate more realistic and correlative images.",
"title": ""
},
{
"docid": "760a303502d732ece14e3ea35c0c6297",
"text": "Data centers are experiencing a remarkable growth in the number of interconnected servers. Being one of the foremost data center design concerns, network infrastructure plays a pivotal role in the initial capital investment and ascertaining the performance parameters for the data center. Legacy data center network (DCN) infrastructure lacks the inherent capability to meet the data centers growth trend and aggregate bandwidth demands. Deployment of even the highest-end enterprise network equipment only delivers around 50% of the aggregate bandwidth at the edge of network. The vital challenges faced by the legacy DCN architecture trigger the need for new DCN architectures, to accommodate the growing demands of the ‘cloud computing’ paradigm. We have implemented and simulated the state of the art DCN models in this paper, namely: (a) legacy DCN architecture, (b) switch-based, and (c) hybrid models, and compared their effectiveness by monitoring the network: (a) throughput and (b) average packet delay. The presented analysis may be perceived as a background benchmarking study for the further research on the simulation and implementation of the DCN-customized topologies and customized addressing protocols in the large-scale data centers. We have performed extensive simulations under various network traffic patterns to ascertain the strengths and inadequacies of the different DCN architectures. Moreover, we provide a firm foundation for further research and enhancement in DCN architectures. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "938f49e103d0153c82819becf96f126c",
"text": "Humans interpret texts with respect to some background information, or world knowledge, and we would like to develop automatic reading comprehension systems that can do the same. In this paper, we introduce a task and several models to drive progress towards this goal. In particular, we propose the task of rare entity prediction: given a web document with several entities removed, models are tasked with predicting the correct missing entities conditioned on the document context and the lexical resources. This task is challenging due to the diversity of language styles and the extremely large number of rare entities. We propose two recurrent neural network architectures which make use of external knowledge in the form of entity descriptions. Our experiments show that our hierarchical LSTM model performs significantly better at the rare entity prediction task than those that do not make use of external resources.",
"title": ""
},
{
"docid": "dc93d2204ff27c7d55a71e75d2ae4ca9",
"text": "Locating and securing an Alzheimer's patient who is outdoors and in wandering state is crucial to patient's safety. Although advances in geotracking and mobile technology have made locating patients instantly possible, reaching them while in wandering state may take time. However, a social network of caregivers may help shorten the time that it takes to reach and secure a wandering AD patient. This study proposes a new type of intervention based on novel mobile application architecture to form and direct a social support network of caregivers for locating and securing wandering patients as soon as possible. System employs, aside from the conventional tracking mechanism, a wandering detection mechanism, both of which operates through a tracking device installed a Subscriber Identity Module for Global System for Mobile Communications Network(GSM). System components are being implemented using Java. Family caregivers will be interviewed prior to and after the use of the system and Center For Epidemiologic Studies Depression Scale, Patient Health Questionnaire and Zarit Burden Interview will be applied to them during these interviews to find out the impact of the system in terms of depression, anxiety and burden, respectively.",
"title": ""
},
{
"docid": "83ccee768c29428ea8a575b2e6faab7d",
"text": "Audio-based cough detection has become more pervasive in recent years because of its utility in evaluating treatments and the potential to impact the quality of life for individuals with chronic cough. We critically examine the current state of the art in cough detection, concluding that existing approaches expose private audio recordings of users and bystanders. We present a novel algorithm for detecting coughs from the audio stream of a mobile phone. Our system allows cough sounds to be reconstructed from the feature set, but prevents speech from being reconstructed intelligibly. We evaluate our algorithm on data collected in the wild and report an average true positive rate of 92% and false positive rate of 0.5%. We also present the results of two psychoacoustic experiments which characterize the tradeoff between the fidelity of reconstructed cough sounds and the intelligibility of reconstructed speech.",
"title": ""
}
] |
scidocsrr
|
f650a963b81accebef2c80e08e89931e
|
Institutionalization of IT Compliance: A Longitudinal Study
|
[
{
"docid": "69be35016630139445f693fd8beda509",
"text": "Developing information technology (IT) gov-ernance structures within an organization has always been challenging. This is particularly the case in organizations that have achieved growth through mergers and acquisitions. When the acquired organizations are geographically located in different regions than the host enterprise, the factors affecting this integration and the choice of IT governance structures are quite different than when this situation does not exist. This study performs an exploratory examination of the factors that affect the choice of IT governance structures in organizations that grow through mergers and acquisitions in developing countries using the results of a case study of an international telecommunications company. We find that in addition to the commonly recognized factors such as government regulation, competition and market stability, organizational culture, and IT competence, top management's predisposition toward a specific business strategy and governance structure can profoundly influence the choice of IT governance in organizations. Managerial implications are discussed.",
"title": ""
}
] |
[
{
"docid": "8d45954f6c038910586d55e9ca3ba924",
"text": "IAA produced by bacteria of the genus Azospirillum spp. can promote plant growth by stimulating root formation. Native Azospirillum spp., isolated from Irannian soils had been evaluated this ability in both qualitative and quantitative methods and registered the effects of superior ones on morphological, physiological and root growth of wheat. The roots of wheat seedling responded positively to the several bacteria inoculations by an increase in root length, dry weight and by the lateral root hairs.",
"title": ""
},
{
"docid": "c1d5df0e2058e3f191a8227fca51a2fb",
"text": "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"title": ""
},
{
"docid": "9c7fbbde15c03078bce7bd8d07fa6d2a",
"text": "• For each sense sij, we create a sense embedding E(sij), again a D-dimensional vector. • The lemma embeddings can be decomposed into a mix (e.g. a convex combination) of sense vectors, for instance F(rock) = 0.3 · E(rock-1) + 0.7 · E(rock-2). The “mix variables” pij are non-negative and sum to 1 for each lemma. • The intuition of the optimization that each sense sij should be “close” to a number of other concepts, called the network neighbors, that we know are related to it, as defined by a semantic network. For instance, rock-2 might be defined by the network to be related to other types of music.",
"title": ""
},
{
"docid": "05f25a2de55907773c9ff13b8a2fe5f6",
"text": "Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents—the \"steepness\" of the learning curve—yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.",
"title": ""
},
{
"docid": "8171294a51cb3a83c43243ed96948c3d",
"text": "The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support. Even though MMV problems have been traditionally addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees the accurate recovery in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between the probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid 1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies the parts of support using CS, after which the remaining supports are estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing CS methods and that it can approach the optimal -bound with finite number of snapshots even in cases where the signals are linearly dependent.",
"title": ""
},
{
"docid": "e0fc099ecd24d8d8e6118c01e4ed2e82",
"text": "The stated goal for visual data exploration is to operate at a rate that matches the pace of human data analysts, but the ever increasing amount of data has led to a fundamental problem: datasets are often too large to process within interactive time frames. Progressive analytics and visualizations have been proposed as potential solutions to this issue. By processing data incrementally in small chunks, progressive systems provide approximate query answers at interactive speeds that are then refined over time with increasing precision. We study how progressive visualizations affect users in exploratory settings in an experiment where we capture user behavior and knowledge discovery through interaction logs and think-aloud protocols. Our experiment includes three visualization conditions and different simulated dataset sizes. The visualization conditions are: (1) blocking, where results are displayed only after the entire dataset has been processed; (2) instantaneous, a hypothetical condition where results are shown almost immediately; and (3) progressive, where approximate results are displayed quickly and then refined over time. We analyze the data collected in our experiment and observe that users perform equally well with either instantaneous or progressive visualizations in key metrics, such as insight discovery rates and dataset coverage, while blocking visualizations have detrimental effects.",
"title": ""
},
{
"docid": "1b6af47ddb23b3927c451b8b659fb13e",
"text": "— This project presents an approach to develop a real-time hand gesture recognition enabling human-computer interaction. It is \" Vision Based \" that uses only a webcam and Computer Vision (CV) technology, such as image processing that can recognize several hand gestures. The applications of real time hand gesture recognition are numerous, due to the fact that it can be used almost anywhere where we interact with computers ranging from basic usage which involves small applications to domain-specific specialized applications. Currently, at this level our project is useful for the society but it can further be expanded to be readily used at the industrial level as well. Gesture recognition is an area of active current research in computer vision. Existing systems use hand detection primarily with some type of marker. Our system, however, uses a real-time hand image recognition system. Our system, however, uses a real-time hand image recognition without any marker, simply using bare hands. I. INTRODUCTION In today \" s computer age, every individual is dependent to perform most of their day-today tasks using computers. The major input devices one uses while operating a computer are keyboard and mouse. But there are a wide range of health problems that affects many people nowadays, caused by the constant and continuous work with the computer. Direct use of hands as an input device is an attractive method for providing natural Human Computer Interaction which has evolved from text-based interfaces through 2D graphical-based interfaces, multimedia-supported interfaces, to fully fledged multi participant Virtual Environment (VE) systems. Since hand gestures are completely natural form for communication it does not adversely affect the health of the operator as in case of excessive usage of keyboard and mouse. Imagine the human-computer interaction of the future: A 3Dapplication where you can move and rotate objects simply by moving and rotating your hand-all without touching any input device. In this paper a review of vision based hand gesture recognition is presented.",
"title": ""
},
{
"docid": "20e504a115a1448ea366eae408b6391f",
"text": "Clustering algorithms have emerged as an alternative powerful meta-learning tool to accurately analyze the massive volume of data generated by modern applications. In particular, their main goal is to categorize data into clusters such that objects are grouped in the same cluster when they are similar according to specific metrics. There is a vast body of knowledge in the area of clustering and there has been attempts to analyze and categorize them for a larger number of applications. However, one of the major issues in using clustering algorithms for big data that causes confusion amongst practitioners is the lack of consensus in the definition of their properties as well as a lack of formal categorization. With the intention of alleviating these problems, this paper introduces concepts and algorithms related to clustering, a concise survey of existing (clustering) algorithms as well as providing a comparison, both from a theoretical and an empirical perspective. From a theoretical perspective, we developed a categorizing framework based on the main properties pointed out in previous studies. Empirically, we conducted extensive experiments where we compared the most representative algorithm from each of the categories using a large number of real (big) data sets. The effectiveness of the candidate clustering algorithms is measured through a number of internal and external validity metrics, stability, runtime, and scalability tests. In addition, we highlighted the set of clustering algorithms that are the best performing for big data.",
"title": ""
},
{
"docid": "d87295095ef11648890b19cd0608d5da",
"text": "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.",
"title": ""
},
{
"docid": "63b210cc5e1214c51b642e9a4a2a1fb0",
"text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.",
"title": ""
},
{
"docid": "5b759f2d581a8940127b5e45019039d7",
"text": "The structure of the domain name is highly relevant for providing insights into the management, organization and operation of a given enterprise. Security assessment and network penetration testing are using information sourced from the DNS service in order to map the network, perform reconnaissance tasks, identify services and target individual hosts. Tracking the domain names used by popular Botnets is another major application that needs to undercover their underlying DNS structure. Current approaches for this purpose are limited to simplistic brute force scanning or reverse DNS, but these are unreliable. Brute force attacks depend of a huge list of known words and thus, will not work against unknown names, while reverse DNS is not always setup or properly configured. In this paper, we address the issue of fast and efficient generation of DNS names and describe practical experiences against real world large scale DNS names. Our approach is based on techniques derived from natural language modeling and leverage Markov Chain Models in order to build the first DNS scanner (SDBF) that is leveraging both, training and advanced language modeling approaches.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "ff6a2e6b0fbb4e195b095981ab97aae0",
"text": "As broadband speeds increase, latency is becoming a bottleneck for many applications—especially for Web downloads. Latency affects many aspects of Web page load time, from DNS lookups to the time to complete a three-way TCP handshake; it also contributes to the time it takes to transfer the Web objects for a page. Previous work has shown that much of this latency can occur in the last mile [2]. Although some performance bottlenecks can be mitigated by increasing downstream throughput (e.g., by purchasing a higher service plan), in many cases, latency introduces performance bottlenecks, particularly for connections with higher throughput. To mitigate latency bottlenecks in the last mile, we have implemented a system that performs DNS prefetching and TCP connection caching to the Web sites that devices inside a home visit most frequently, a technique we call popularity-based prefetching. Many devices and applications already perform DNS prefetching and maintain persistent TCP connections, but most prefetching is predictive based on the content of the page, rather than on past site popularity. We evaluate the optimizations using a simulator that we drive from traffic traces that we collected from five homes in the BISmark testbed [1]. We find that performing DNS prefetching and TCP connection caching for the twenty most popular sites inside the home can double DNS and connection cache hit rates.",
"title": ""
},
{
"docid": "7b78b138539b876660c2a320aa10cd2e",
"text": "What are the psychological, computational and neural underpinnings of language? Are these neurocognitive correlates dedicated to language? Do different parts of language depend on distinct neurocognitive systems? Here I address these and other issues that are crucial for our understanding of two fundamental language capacities: the memorization of words in the mental lexicon, and the rule-governed combination of words by the mental grammar. According to the declarative/procedural model, the mental lexicon depends on declarative memory and is rooted in the temporal lobe, whereas the mental grammar involves procedural memory and is rooted in the frontal cortex and basal ganglia. I argue that the declarative/procedural model provides a new framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
},
{
"docid": "dd0319de90cd0e58a9298a62c2178b25",
"text": "The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper presents a novel hybrid automatic approach for the extraction of retinal image vessels. The method consists in the application of mathematical morphology and a fuzzy clustering algorithm followed by a purification procedure. In mathematical morphology, the retinal image is smoothed and strengthened so that the blood vessels are enhanced and the background information is suppressed. The fuzzy clustering algorithm is then employed to the previous enhanced image for segmentation. After the fuzzy segmentation, a purification procedure is used to reduce the weak edges and noise, and the final results of the blood vessels are consequently achieved. The performance of the proposed method is compared with some existing segmentation methods and hand-labeled segmentations. The approach has been tested on a series of retinal images, and experimental results show that our technique is promising and effective.",
"title": ""
},
{
"docid": "eb7eb6777a68fd594e2e94ac3cba6be9",
"text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.",
"title": ""
},
{
"docid": "b3d232625a70ddf1733448ad26a9a0a0",
"text": "This study aims at minimizing the acoustic noise from a magnetic origin of a claw-pole alternator. This optimization is carried out through a multiphysics simulation, which includes the computation of magnetic forces, vibrations, and the resulting noise. Therefore, a mechanical model of the alternator has to be developed to determine its main modes. Predicted modal parameters are checked against experimental results. Based on this model, the sound power level is simulated and compared with measurements. Finally, the rotor shape is optimized and a significant reduction of the noise level is found by simulation.",
"title": ""
},
{
"docid": "ce0f21b03d669b72dd954352e2c35ab1",
"text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.",
"title": ""
}
] |
scidocsrr
|
72d91e43d8a595b27174cb45d77d63fb
|
Computational Drug Discovery with Dyadic Positive-Unlabeled Learning
|
[
{
"docid": "4cd605375f5d27c754e4a21b81b39f1a",
"text": "The dominant paradigm in drug discovery is the concept of designing maximally selective ligands to act on individual drug targets. However, many effective drugs act via modulation of multiple proteins rather than single targets. Advances in systems biology are revealing a phenotypic robustness and a network structure that strongly suggests that exquisitely selective compounds, compared with multitarget drugs, may exhibit lower than desired clinical efficacy. This new appreciation of the role of polypharmacology has significant implications for tackling the two major sources of attrition in drug development--efficacy and toxicity. Integrating network biology and polypharmacology holds the promise of expanding the current opportunity space for druggable targets. However, the rational design of polypharmacology faces considerable challenges in the need for new methods to validate target combinations and optimize multiple structure-activity relationships while maintaining drug-like properties. Advances in these areas are creating the foundation of the next paradigm in drug discovery: network pharmacology.",
"title": ""
},
{
"docid": "67f13c2b686593398320d8273d53852f",
"text": "Drug-drug interactions (DDIs) may cause serious side-effects that draw great attention from both academia and industry. Since some DDIs are mediated by unexpected drug-human protein interactions, it is reasonable to analyze the chemical-protein interactome (CPI) profiles of the drugs to predict their DDIs. Here we introduce the DDI-CPI server, which can make real-time DDI predictions based only on molecular structure. When the user submits a molecule, the server will dock user's molecule across 611 human proteins, generating a CPI profile that can be used as a feature vector for the pre-constructed prediction model. It can suggest potential DDIs between the user's molecule and our library of 2515 drug molecules. In cross-validation and independent validation, the server achieved an AUC greater than 0.85. Additionally, by investigating the CPI profiles of predicted DDI, users can explore the PK/PD proteins that might be involved in a particular DDI. A 3D visualization of the drug-protein interaction will be provided as well. The DDI-CPI is freely accessible at http://cpi.bio-x.cn/ddi/.",
"title": ""
}
] |
[
{
"docid": "bdffdfe92df254d0b13c1a1c985c0400",
"text": "We propose a model to automatically describe changes introduced in the source code of a program using natural language. Our method receives as input a set of code commits, which contains both the modifications and message introduced by an user. These two modalities are used to train an encoder-decoder architecture. We evaluated our approach on twelve real world open source projects from four different programming languages. Quantitative and qualitative results showed that the proposed approach can generate feasible and semantically sound descriptions not only in standard in-project settings, but also in a cross-project setting.",
"title": ""
},
{
"docid": "f202e380dfd1022e77a04212394be7e1",
"text": "As usage of cloud computing increases, customers are mainly concerned about choosing cloud infrastructure with sufficient security. Concerns are greater in the multitenant environment on a public cloud. This paper addresses the security assessment of OpenStack open source cloud solution and virtual machine instances with different operating systems hosted in the cloud. The methodology and realized experiments target vulnerabilities from both inside and outside the cloud. We tested four different platforms and analyzed the security assessment. The main conclusions of the realized experiments show that multi-tenant environment raises new security challenges, there are more vulnerabilities from inside than outside and that Linux based Ubuntu, CentOS and Fedora are less vulnerable than Windows. We discuss details about these vulnerabilities and show how they can be solved by appropriate patches and other solutions. Keywords-Cloud Computing; Security Assessment; Virtualization.",
"title": ""
},
{
"docid": "71a9394d995cefb8027bed3c56ec176c",
"text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%",
"title": ""
},
{
"docid": "4405611eafc1f6df4c4fa0b60a50f90d",
"text": "Balancing robot which is proposed in this paper is a robot that relies on two wheels in the process of movement. Unlike the other mobile robot which is mechanically stable in its standing position, balancing robot need a balancing control which requires an angle value to be used as tilt feedback. The balancing control will control the robot, so it can maintain its standing position. Beside the balancing control itself, the movement of balancing robot needs its own control in order to control the movement while keeping the robot balanced. Both controllers will be combined since will both of them control the same wheel as the actuator. In this paper we proposed a cascaded PID control algorithm to combine the balancing and movement or distance controller. The movement of the robot is controlled using a distance controller that use rotary encoder sensor to measure its traveled distance. The experiment shows that the robot is able to climb up on 30 degree sloping board. By cascading the distance control to the balancing control, the robot is able to move forward, turning, and reach the desired position by calculating the body's tilt angle.",
"title": ""
},
{
"docid": "5090070d6d928b83bd22d380f162b0a6",
"text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.",
"title": ""
},
{
"docid": "2bf0219394d87654d2824c805844fcaa",
"text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 [email protected] • [email protected] • [email protected]",
"title": ""
},
{
"docid": "7cb58462e6388a67376f5f0e95f8a8c4",
"text": "In 2008 Bitcoin [Nak09] was introduced as the first decentralized digital currency. Its core underlying technology is the blockchain which is essentially a distributed append-only database. In particular, blockchain solves the key issue in decentralized digital currencies – the double spending problem – which asks: “if there is no central authority, what stops a malicious party from spending the same unit of currency multiple times”. Blockchain solves this problem by keeping track of each transaction that has been ever made while being robust against adversarial modifications.",
"title": ""
},
{
"docid": "96aa1f19a00226af7b5bbe0bb080582e",
"text": "CONTEXT\nComprehensive discharge planning by advanced practice nurses has demonstrated short-term reductions in readmissions of elderly patients, but the benefits of more intensive follow-up of hospitalized elders at risk for poor outcomes after discharge has not been studied.\n\n\nOBJECTIVE\nTo examine the effectiveness of an advanced practice nurse-centered discharge planning and home follow-up intervention for elders at risk for hospital readmissions.\n\n\nDESIGN\nRandomized clinical trial with follow-up at 2, 6, 12, and 24 weeks after index hospital discharge.\n\n\nSETTING\nTwo urban, academically affiliated hospitals in Philadelphia, Pa.\n\n\nPARTICIPANTS\nEligible patients were 65 years or older, hospitalized between August 1992 and March 1996, and had 1 of several medical and surgical reasons for admission.\n\n\nINTERVENTION\nIntervention group patients received a comprehensive discharge planning and home follow-up protocol designed specifically for elders at risk for poor outcomes after discharge and implemented by advanced practice nurses.\n\n\nMAIN OUTCOME MEASURES\nReadmissions, time to first readmission, acute care visits after discharge, costs, functional status, depression, and patient satisfaction.\n\n\nRESULTS\nA total of 363 patients (186 in the control group and 177 in the intervention group) were enrolled in the study; 70% of intervention and 74% of control subjects completed the trial. Mean age of sample was 75 years; 50% were men and 45% were black. By week 24 after the index hospital discharge, control group patients were more likely than intervention group patients to be readmitted at least once (37.1 % vs 20.3 %; P<.001). Fewer intervention group patients had multiple readmissions (6.2% vs 14.5%; P = .01) and the intervention group had fewer hospital days per patient (1.53 vs 4.09 days; P<.001). Time to first readmission was increased in the intervention group (P<.001). At 24 weeks after discharge, total Medicare reimbursements for health services were about $1.2 million in the control group vs about $0.6 million in the intervention group (P<.001). There were no significant group differences in post-discharge acute care visits, functional status, depression, or patient satisfaction.\n\n\nCONCLUSIONS\nAn advanced practice nurse-centered discharge planning and home care intervention for at-risk hospitalized elders reduced readmissions, lengthened the time between discharge and readmission, and decreased the costs of providing health care. Thus, the intervention demonstrated great potential in promoting positive outcomes for hospitalized elders at high risk for rehospitalization while reducing costs.",
"title": ""
},
{
"docid": "5601a0da8cfaf42d30b139c535ae37db",
"text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.",
"title": ""
},
{
"docid": "cc57e42da57af33edc53ba64f33e0178",
"text": "This paper focuses on the design and development of a low-cost QFN package that is based on wirebond interconnects. One of the design goals is to extend the frequency at which the package can be used to 40-50 GHz (above the K band), in the millimeter-wave range. Owing to the use of mass production assembly protocols and materials, such as commercially available QFN in a mold compound, the design that is outlined in this paper significantly reduces the cost of assembly of millimeter wave modules. To operate the package at 50 GHz or a higher frequency, several key design features are proposed. They include the use of through vias (backside vias) and ground bondwires to provide ground return currents. This paper also provides rigorous validation steps that we took to obtain the key high frequency characteristics. Since a molding compound is used in conventional QFN packages, the material and its effectiveness in determining the signal propagation have to be incorporated in the overall design. However, the mold compound creates some extra challenges in the de-embedding task. For example, the mold compound must be removed to expose the probing pads so the effect of the microstrip on the GaAs chip can be obtained and de-embedded. Careful simulation and experimental validation reveal that the proposed QFN design achieves a return loss of -10 dB and an insertion loss of -1.5 dB up to 50 GHz.",
"title": ""
},
{
"docid": "26c58183e71f916f37d67f1cf848f021",
"text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, International Journal of Biomedical Engineering and Clinical Science 2016; 2(1): 8-17 9 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.",
"title": ""
},
{
"docid": "172561db4f6d4bfe2b15c8d26adc3d91",
"text": "\"Big Data\" in map-reduce (M-R) clusters is often fundamentally temporal in nature, as are many analytics tasks over such data. For instance, display advertising uses Behavioral Targeting (BT) to select ads for users based on prior searches, page views, etc. Previous work on BT has focused on techniques that scale well for offline data using M-R. However, this approach has limitations for BT-style applications that deal with temporal data: (1) many queries are temporal and not easily expressible in M-R, and moreover, the set-oriented nature of M-R front-ends such as SCOPE is not suitable for temporal processing, (2) as commercial systems mature, they may need to also directly analyze and react to real-time data feeds since a high turnaround time can result in missed opportunities, but it is difficult for current solutions to naturally also operate over real-time streams. Our contributions are twofold. First, we propose a novel framework called TiMR (pronounced timer), that combines a time-oriented data processing system with a M-R framework. Users write and submit analysis algorithms as temporal queries - these queries are succinct, scale-out-agnostic, and easy to write. They scale well on large-scale offline data using TiMR, and can work unmodified over real-time streams. We also propose new cost-based query fragmentation and temporal partitioning schemes for improving efficiency with TiMR. Second, we show the feasibility of this approach for BT, with new temporal algorithms that exploit new targeting opportunities. Experiments using real data from a commercial ad platform show that TiMR is very efficient and incurs orders-of-magnitude lower development effort. Our BT solution is easy and succinct, and performs up to several times better than current schemes in terms of memory, learning time, and click-through-rate/coverage.",
"title": ""
},
{
"docid": "3d81cdfc3d9266d08dc6c28099397668",
"text": "We address the problem of predicting new drug-target interactions from three inputs: known interactions, similarities over drugs and those over targets. This setting has been considered by many methods, which however have a common problem of allowing to have only one similarity matrix over drugs and that over targets. The key idea of our approach is to use more than one similarity matrices over drugs as well as those over targets, where weights over the multiple similarity matrices are estimated from data to automatically select similarities, which are effective for improving the performance of predicting drug-target interactions. We propose a factor model, named Multiple Similarities Collaborative Matrix Factorization(MSCMF), which projects drugs and targets into a common low-rank feature space, which is further consistent with weighted similarity matrices over drugs and those over targets. These two low-rank matrices and weights over similarity matrices are estimated by an alternating least squares algorithm. Our approach allows to predict drug-target interactions by the two low-rank matrices collaboratively and to detect similarities which are important for predicting drug-target interactions. This approach is general and applicable to any binary relations with similarities over elements, being found in many applications, such as recommender systems. In fact, MSCMF is an extension of weighted low-rank approximation for one-class collaborative filtering. We extensively evaluated the performance of MSCMF by using both synthetic and real datasets. Experimental results showed nice properties of MSCMF on selecting similarities useful in improving the predictive performance and the performance advantage of MSCMF over six state-of-the-art methods for predicting drug-target interactions.",
"title": ""
},
{
"docid": "8dcfd08d5684ec9fd7d5a438a8086f23",
"text": "We consider the problem of predicting semantic segmentation of future frames in a video. Given several observed frames in a video, our goal is to predict the semantic segmentation map of future frames that are not yet observed. A reliable solution to this problem is useful in many applications that require real-time decision making, such as autonomous driving. We propose a novel model that uses convolutional LSTM (ConvLSTM) to encode the spatiotemporal information of observed frames for future prediction. We also extend our model to use bidirectional ConvLSTM to capture temporal information in both directions. Our proposed approach outperforms other state-of-the-art methods on the benchmark dataset.",
"title": ""
},
{
"docid": "63e3be30835fd8f544adbff7f23e13ab",
"text": "Deaths due to plastic bag suffocation or plastic bag asphyxia are not reported in Malaysia. In the West many suicides by plastic bag asphyxia, particularly in the elderly and those who are chronically and terminally ill, have been reported. Accidental deaths too are not uncommon in the West, both among small children who play with shopping bags and adolescents who are solvent abusers. Another well-known but not so common form of accidental death from plastic bag asphyxia is sexual asphyxia, which is mostly seen among adult males. Homicide by plastic bag asphyxia too is reported in the West and the victims are invariably infants or adults who are frail or terminally ill and who cannot struggle. Two deaths due to plastic bag asphyxia are presented. Both the autopsies were performed at the University Hospital Mortuary, Kuala Lumpur. Both victims were 50-year old married Chinese males. One death was diagnosed as suicide and the other as sexual asphyxia. Sexual asphyxia is generally believed to be a problem associated exclusively with the West. Specific autopsy findings are often absent in deaths due to plastic bag asphyxia and therefore such deaths could be missed when some interested parties have altered the scene and most importantly have removed the plastic bag. A visit to the scene of death is invariably useful.",
"title": ""
},
{
"docid": "b226b612db064f720e32e5a7fd9d9dec",
"text": "Clustering is a fundamental technique widely used for exploring the inherent data structure in pattern recognition and machine learning. Most of the existing methods focus on modeling the similarity/dissimilarity relationship among instances, such as k-means and spectral clustering, and ignore to extract more effective representation for clustering. In this paper, we propose a deep embedding network for representation learning, which is more beneficial for clustering by considering two constraints on learned representations. We first utilize a deep auto encoder to learn the reduced representations from the raw data. To make the learned representations suitable for clustering, we first impose a locality-persevering constraint on the learned representations, which aims to embed original data into its underlying manifold space. Then, different from spectral clustering which extracts representations from the block diagonal similarity matrix, we apply a group sparsity constraint for the learned representations, and aim to learn block diagonal representations in which the nonzero groups correspond to its cluster. After obtaining the learned representations, we use k-means to cluster them. To evaluate the proposed deep embedding network, we compare its performance with k-means and spectral clustering on three commonly-used datasets. The experiments demonstrate that the proposed method achieves promising performance.",
"title": ""
},
{
"docid": "ff20e5cd554cd628eba07776fa9a5853",
"text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.",
"title": ""
},
{
"docid": "ae85cf24c079ff446b76f0ba81146369",
"text": "Subgraph Isomorphism is a fundamental problem in graph data processing. Most existing subgraph isomorphism algorithms are based on a backtracking framework which computes the solutions by incrementally matching all query vertices to candidate data vertices. However, we observe that extensive duplicate computation exists in these algorithms, and such duplicate computation can be avoided by exploiting relationships between data vertices. Motivated by this, we propose a novel approach, BoostIso, to reduce duplicate computation. Our extensive experiments with real datasets show that, after integrating our approach, most existing subgraph isomorphism algorithms can be speeded up significantly, especially for some graphs with intensive vertex relationships, where the improvement can be up to several orders of magnitude.",
"title": ""
},
{
"docid": "ec19face14810817bfd824d70a11c746",
"text": "The article deals with various ways of memristor modeling and simulation in the MATLAB&Simulink environment. Recently used and published mathematical memristor model serves as a base, regarding all known features of its behavior. Three different approaches in the MATLAB&Simulink system are used for the differential and other equations formulation. The first one employs the standard system core offer for the Ordinary Differential Equations solutions (ODE) in the form of an m-file. The second approach is the model construction in Simulink environment. The third approach employs so-called physical modeling using the built-in Simscape system. The output data are the basic memristor characteristics and appropriate time courses. The features of all models are discussed, especially regarding the computer simulation. Possible problems that may occur during modeling are pointed. Key-Words: memristor, modeling and simulation, MATLAB, Simulink, Simscape, physical model",
"title": ""
},
{
"docid": "51ef96b352d36f5ab933c10184bb385b",
"text": "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.",
"title": ""
}
] |
scidocsrr
|